ACE’s Response to John Halstead
Hello! I’m Toni, ACE’s new director of research. I’ve worked in ACE’s research department for two and a half years, but I just stepped into the director role on July 31.
On behalf of ACE, I’d like to thank John Halstead for engaging so thoughtfully with our work and for his dedication to improving the field of animal advocacy research. We value honest feedback, which is precisely why, several months ago, we invited Mr. Halstead and six other individuals to act as consultants during our charity evaluation process this year. I’d also like to note my appreciation that Mr. Halstead shared his post with us prior to publishing it, which gave us the opportunity to consider his points and to draft this response for simultaneous publication.
ACE would like to take this opportunity for a public exchange about our work to:
explain our position on our older intervention research,
clarify the relationship between our cost-effectiveness estimates (CEEs) and our “all things considered” point of view,
clarify the relationship between our intervention reports and our charity reviews, and
outline some of our research priorities for next year.
My goal in this piece is not to address every point that Mr. Halstead raises, but rather to address what I believe is the underlying issue. It seems to me that Mr. Halstead’s critique is largely based on: (i) a fundamental disagreement with ACE about the role that our CEEs should play in our decisions, and (ii) a misunderstanding about the role that our CEEs do play in our decisions. While we believe that our CEEs should (and do) play a very small role in our decisions, Mr. Halstead seems to believe that our CEEs should (and do) play a central role in our decisions. As a result, Mr. Halstead understandably has a very different idea than we do about which areas of our research we should prioritize.
Our Position on our Older Intervention Research
We are aware of some limitations of our older intervention research, and we agree with Mr. Halstead that ACE’s 2014-2016 reports on corporate outreach, undercover investigations, humane education, and online ads are not up to our current standards. We recognize, for instance, that our previous use of “pessimistic,” “realistic,” and “optimistic” labels for our quantitative estimates was not ideal, and we did not sufficiently explain how we made or used those estimates. (In early 2017, we replaced our labeling strategy with the use of 90% subjective confidence intervals.) We are also aware that new research has become available since the publication of some of our older intervention reports. As such, they are in need of updating.
One question that’s been on my mind since I became ACE’s director of research (about five weeks ago) is whether or not we should archive some of our older intervention reports until we are able to update them. Since ACE’s research department is currently hard at work on our charity reviews, my initial plan was to wait until our reviews are published in November before making any major decisions about our intervention work. However, Mr. Halstead’s post has led us to consider the value of our older reports sooner than planned, and we’ve expedited our decision. We will be archiving our corporate outreach, undercover investigation, humane education, and online ads reports on September 14. This will allow interested readers to reference them easily for the next week—though even when they are archived, the reports will remain available on our site via the search tool.
We were glad to learn that Mr. Halstead agrees that our 2017 leafleting intervention report was of good standard, and I’d add that our 2017 protest intervention report is of a similar standard. These are our two most recent intervention reports, and they both utilize the new intervention research methodology that we officially introduced last November. Our new methodology includes a more systematic literature search, facilitates more transparent communication about our reasoning, and allows for more rigorous statistical analysis, when appropriate. We will ensure that all of our future intervention work is as rigorous as our leafleting and protest reports, if not more so.
The Role of Cost-Effectiveness Estimates in our Work
Mr. Halstead is correct that we have multiple platforms in which we present our views. He divides these platforms into three categories: (i) intervention reports, (ii) CEEs, and (iii) the “all things considered” views expressed in our charity reviews. In fact—in our intervention reports as well as in our charity reviews—we present (i) CEEs and (ii) “all things considered” views.
Mr. Halstead is also correct that the views expressed on these different platforms sometimes differ. “For example,” he writes, “the view expressed in the intervention report on investigations is different to the view expressed in the cost-effectiveness analyses of investigations.” Actually, I’d suggest that the views expressed in our intervention reports are often quite different from the views expressed in our charity reviews. Additionally—in both our intervention reports and our charity reviews—our cost-effectiveness estimates are often quite different from our overall (or “all things considered”) views. These differences are intentional and justified, as I’ll explain below.
The Relationship Between the Cost-Effectiveness Estimates and the Overall Views Expressed in our Intervention Reports
Our overall views of an intervention are informed by a number of factors other than our best estimate of the average cost-effectiveness of that intervention. After all, practical decisions about whether or not to devote further resources to an intervention should be made based on the intervention’s marginal cost-effectiveness, and our CEEs estimate the intervention’s average cost-effectiveness.
In order to develop a sense of an intervention’s marginal cost-effectiveness, we consider how its average cost-effectiveness might change over time depending on the amount of resources invested in it, its interactions with other interventions, shifts in public opinion or political context, and so on. Even if we believe an intervention is currently highly cost-effective, we might think that investing further in it would have diminishing returns. Similarly, we consider whether an intervention might be necessary for the success of the animal advocacy movement. If so, we may recommend investing further in that intervention even if it doesn’t currently seem to be accomplishing many tangible benefits. And of course, there are always costs and benefits of interventions that we simply don’t include in our CEEs because they can’t be quantified with any helpful degree of precision, though we discuss such costs and benefits elsewhere in our reports and they do factor into our overall views.
Mr. Halstead repeatedly claims that our “all things considered” view is that most forms of grassroots advocacy have “close to zero effect.” That’s not our view, and it’s also not how we think about whether or not to recommend interventions. As quoted, we wrote in our THL review that “there is little evidence available” for the effects of the grassroots outreach that THL conducts, such as “leafleting, online ads, and education.” We also wrote that we “do not currently recommend the use of leafleting or online ads as we suspect that they are not as effective as some other means of public outreach.” It does not follow from these claims that our overall view is that grassroots advocacy has close to no effect. As we explain in the preceding paragraph of THL’s review, “we still think it’s important for the animal movement to target some outreach toward individuals, as a shift in public attitudes could lead to greater support for new animal-friendly policies. Public outreach might even be a necessary precursor to achieving institutional change.” In other words, because of the possible necessity of grassroots outreach and because of its interactions with other interventions, our overall view of grassroots outreach is distinct from our CEEs for leafleting, online ads, and humane education.
The Relationship Between the Cost-Effectiveness Estimates and Overall Views Expressed in our Charity Reviews
The CEEs included in our charity reviews are very rough estimates of a charity’s average cost-effectiveness. We emphasize strongly (with bold letters) that they should not be taken as our overall view of a charity’s effectiveness. Our overall view of each charity is informed by all seven of our criteria.
Mr. Halstead seems to believe that our CEEs are the most important factor in our recommendation decisions. When we reminded him in conversation about our six other criteria, he argued that we wouldn’t value factors like strong leadership or track record in a charity that wasn’t cost-effective, and that therefore our CEEs must play a central role in our recommendation decisions. That seems like a fair assumption to make about an effective altruist organization. Once again, though, our CEEs are estimates of the average cost-effectiveness of the charity over the past year, and we make our recommendation decisions based on our beliefs about the marginal cost-effectiveness of each charity. We consider all seven of our criteria to be largely independent indications of marginal cost-effectiveness, as I’ll explain below.
Suppose we learn that the director of a charity we’re evaluating has been embezzling money (though this has never happened). Even if we believe the charity has a high average cost-effectiveness in its work for animals, we might believe that donations to the charity have low marginal cost-effectiveness because the charity is about to lose its director to prison. Therefore, we consider strong leadership to be an indication of a charity’s marginal cost-effectiveness independently of the charity’s average cost-effectiveness. Similarly, suppose we review a charity’s track record and find that it accomplishes more every year on the same budget. Even if its average cost-effectiveness is currently low, we might be optimistic about the marginal cost-effectiveness of donations to that charity. Therefore, we consider track record to be an indication of a charity’s marginal cost-effectiveness independently of the charity’s average cost-effectiveness.
Mr. Halstead highlights some problems with our older intervention research and concludes that: “consequently ACE’s research does not provide much reason to believe that their recommended charities actually improve animal welfare.” If he had said that our cost-effectiveness estimates (on their own) don’t provide much reason to believe that our recommended charities actually improve animal welfare, I might have agreed with him. However, as we explain in our reviews, we put limited weight on those estimates. Our research provides many reasons to believe that our recommended charities help animals. Those reasons can be found in all seven sections of our reviews.
The Relationship Between our Intervention Research and our Charity Reviews
Mr. Halstead notes some apparent inconsistencies between our intervention research and our charity reviews. For example, he points out that: “[ACE’s] view as of August 2018 is that grassroots advocacy has close to no effect, though ACE does estimate that THL’s online outreach is beneficial.” As mentioned, we feel this is a misrepresentation of our overall view of grassroots outreach. However, the point I’d like to make now is that there are sometimes good reasons why our overall view of an intervention might differ from our assessment of that intervention as it is implemented by a particular charity.
Any given intervention can vary widely in its cost-effectiveness depending on how it is implemented. When we model the cost-effectiveness of an intervention, we have to make certain assumptions about how that intervention is implemented. For example, in our protest report, we modeled the cost-effectiveness of the types of protests implemented by THL. If we were writing a charity review for a group like Anonymous for the Voiceless, which uses a very different kind of protest, our model of the cost-effectiveness of their protests might look very different from the model in our protest report.
Readers may be wondering: what is the value of our intervention reports if they aren’t necessarily the basis for the CEEs in our charity reviews? Our answer is two-fold. First, our overall views of each intervention do play some role in our reviews, particularly in Criterion 2 (“Does the charity engage in programs that seem likely to be highly impactful?”). Second, our new methodology for our intervention research was designed to make our intervention reports useful in other ways. For instance, we examine factors that might make an intervention more or less cost-effective, which we hope will be of use to other charities.
Further Thoughts on the Role of Cost-Effectiveness Estimates in our Charity Reviews
A discussion of the role of our CEEs in our charity reviews may appear to be tangential to the points that Mr. Halstead has raised. Because Mr. Halstead believes that our CEEs both should be and are the most important factor in our recommendation decisions, he may not perceive there to be any problem with the role that our CEEs play in our reviews and therefore didn’t discuss it in his post. However, the role of our CEEs in our reviews is actually a key point in this exchange. It is the implicit assumption that allows Mr. Halstead to infer from the flaws in our corporate outreach report that our charity evaluation research “does not provide much reason to believe that their recommended charities actually improve animal welfare.”
Mr. Halstead’s chain of reasoning seems to be:
1. ACE’s corporate outreach intervention report is flawed.
2. Corporate outreach accounts for 90% of THL’s and Animal Equality’s CEEs.
3. Each charity’s CEE is the primary piece of evidence that the charity improves animal welfare.
4. ACE does not provide much reason to believe that their recommended charities improve animal welfare.
Even if we grant (1) and (2), we’ve now explained that premise (3) is false, and therefore Mr. Halstead’s conclusion does not follow.
Our Plans for ACE’s Research Department
ACE’s specific research plans for next year have not yet been set; we have annual strategic planning sessions in December or January, after our charity reviews are released. However, as ACE’s new director of research, I can make the following public commitments about our future work:
We will archive our outdated intervention reports on September 14, 2018.
After our reviews are published in November (and we therefore have more time), we will consider whether we should add any of the problems raised by Mr. Halstead to our public mistakes page.
Our future intervention reports will meet or exceed the standards set by our protest and leafleting reports.
We will continue working to clearly describe the role that CEEs play in our charity reviews, since this has continually caused confusion among our community.
In our 2018 reviews, we are no longer using our online ads report in our CEEs.
We will make every effort to keep our readers apprised of our most current views on our research.
Updating our corporate outreach report is of particularly high priority for us, and Mr. Halstead is correct that this was also true in 2017. I spent the majority of my time this year working on an updated report, but my priorities shifted when I moved into my new position. When our charity reviews are complete, I will either finish up the report or pass it on to another team member.
We are considering further updating our intervention research methodology by breaking our reports into smaller pieces that can be published individually. That way, we would be able to publish or update some pieces of the reports more quickly than we are currently able to publish or update full reports.
As Mr. Halstead points out, we have not yet conducted much original research on the impact of various welfare reforms. We’ve generally left this work to other groups. However, we are considering doing more welfare research in the future. In fact, we have a report on fish welfare that is currently being copy edited for publication. One of our goals is to better anticipate which welfare reforms charities will pursue each year so that we can research the effects of those reforms independently, before we evaluate the charities, rather than relying on others’ work while we evaluate them. We think this will both improve our charity evaluations and allow us to be more useful to charities that are considering which reforms to pursue.
Comments on Some Miscellaneous Claims
“Corporate outreach accounts for the majority of the modelled impact of both charities in the cost-effectiveness analyses of THL and Animal Equality: for THL, ~90% of the modelled impact is from corporate outreach, and ~10% from online outreach; and for Animal Equality, >90% of the modelled impact is from corporate outreach.”
Our team is unsure how Mr. Halstead arrived at these percentages, though we shared with him in conversation that we believe they are incorrect. We assume that when Mr. Halstead refers to the portion of a charity’s “modeled impact” from corporate outreach, he means the weight that corporate outreach carries in the charity’s CEE, where each modeled program is weighted by the portion of the charity’s budget that supports it. By our calculations, corporate outreach accounts for about 63% of THL’s 2017 CEE and about 36% of Animal Equality’s 2017 CEE.
“When I asked ACE about this in 2017, they said that the basis for the figure was another page on the impacts of media coverage on meat demand, which is not linked to or referenced at that point on the cost-effectiveness analysis of undercover investigations.”
We should have made this reference much clearer, though for what it’s worth, the page on the impacts of media coverage on meat demand is linked in Section V.3 and the figure Mr. Halstead is referencing appears in Section V.3.1.
“ACE does not have up to date research of sufficient quality on the welfare effects of corporate campaigns.”
It’s true that we have not conducted much original research on the impact of various corporate campaigns on animal welfare. However, we do not just rely on our own research when we estimate the impact of these campaigns. We also rely on relevant research produced by other groups (e.g., The Open Philanthropy Project’s report on the welfare differences between cage and cage-free housing).
“ACE also does not check whether their recommended charities are genuinely causally responsible for the corporate policy successes that they claim,” and “ACE does not check with third party news sources, experts or with the companies themselves on whether the claims of the charities are accurate.”
We do search online for evidence in the news of each charity’s achievements. The problem is that there is usually no such evidence, particularly in the field of corporate outreach. Of course, the absence of evidence of a charity’s involvement in a corporate campaign is not evidence that the charity was not involved. We’ve also looked up corporations’ press releases announcing their commitments, but these generally do not mention animal charities. (As far as I can remember, I’ve never seen one that does.) We have little reason to believe that a corporation could or would share detailed information about their decisions with us if we asked them. I don’t know who Mr. Halstead has in mind when he mentions checking with “experts,” though we’ve certainly spoken with many experts in corporate campaigning, if that is what he means.
We are always looking for ways to better assess charities’ claims. In the meantime, when we can’t corroborate a charity’s claims with a third party, we are careful to state in our reviews that the charity “reports” or “claims” to have achieved X or Y, rather than that they have done so.
Edited 9/11/18: Please see this comment exchange between Avi Norowitz and me for some additional details regarding our evaluation of the extent to which charities cause corporate policy commitments. I think that the above two paragraphs generally hold, but there are some important cases where they don’t. To be clear, there are a number of cases where charities are named in news coverage of a policy commitment, in a corporation’s press release announcing a commitment, or in the policy commitment itself.
“My interactions with Ms Adleberg and other members of ACE’s current research staff have been very positive...”
“Evaluating the impact of animal charities is generally more difficult than evaluating the impact of charities carrying out direct health interventions because evidence is sparse and much hinges on difficult questions about animal sentience.”
 We have reviewed the entire contents of his post as a team.
 And we do grant (1), though (2) is false, as I discuss in the final section of this piece.
 In THL’s 2017 CEE, we modeled three of their six programs—corporate outreach, grassroots outreach, and online outreach—which account for a total of about 78% of their budget. We weight each estimate of program cost-effectiveness by the proportion of the charity’s budget spent on that program. Corporate outreach accounted for an estimated 49% of THL’s budget in 2017, which equates to 63% of the programs we modeled, and thus 63% of the total CEE. Following the same reasoning, corporate outreach accounted for just 36% of Animal Equality’s 2017 CEE.
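The weighting step described in this footnote can be sketched as a short calculation. The function name is mine, not ACE’s; only the THL figures (corporate outreach at 49% of the budget, the three modeled programs totaling 78%) come from the text above.

```python
def program_share_of_cee(program_budget_share, total_modeled_share):
    """Fraction of the total CEE attributable to one program, when each
    modeled program is weighted by its share of the charity's budget."""
    return program_budget_share / total_modeled_share

# THL 2017 (figures from the footnote above): corporate outreach was an
# estimated 49% of the budget; the three modeled programs together were 78%.
thl_corporate = program_share_of_cee(0.49, 0.78)
print(round(thl_corporate, 2))  # → 0.63, i.e., ~63% of THL's 2017 CEE
```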