Some updates to our charity evaluation process: 2019
We have been conducting annual charity evaluations since 2014. Throughout this time, our goal has remained the same: to find and promote the most effective animal charities. Our seven evaluation criteria have also remained broadly consistent (though we’ve reworded, reordered, and tweaked them over the years). Our process for evaluating charities, however, continues to develop each year. For instance, in 2017, we began having conversations with employees at each charity and began offering charities small grants for participating in our evaluations. In 2018, we began conducting culture surveys at each charity, added new dimensions to some of our criteria, and made some logistical changes to increase our efficiency.
This year, we are making a number of changes, including:
Publishing overall ratings of each charity on each criterion
Increasing the number of visual aids (e.g., charts, tables, and images) in each review
Making changes to our cost-effectiveness models
Making our culture survey mandatory for charities receiving a recommendation
Hiring a fact-checker
Publishing Overall Ratings of Each Charity on Each Criterion
To allow readers to quickly form a general idea of how organizations perform on our seven evaluation criteria, this year we have included in our reviews an overall rating of each charity on each criterion. This decision is also based on feedback from readers who told us that, after skimming the reviews, it was not clear how charities were performing on each criterion. We believe these ratings give a better sense of our overall assessment: they visually represent each charity’s performance (weak, average, or strong) on each criterion, relative to the other charities under review, along with our confidence level (low, moderate, or high) in each case. We hope these ratings make it easier for our audience to compare charities’ performance by criterion and help us better express how confident we are in our appraisal given the available evidence.
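To make the structure of these ratings concrete, here is a minimal sketch of one way a two-dimensional rating (performance plus confidence) could be represented. The names and example values are hypothetical illustrations, not ACE’s actual implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Performance(Enum):
    WEAK = "weak"
    AVERAGE = "average"
    STRONG = "strong"

class Confidence(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class CriterionRating:
    criterion: str            # e.g., "cost effectiveness"
    performance: Performance  # relative to the other charities under review
    confidence: Confidence    # how sure the evaluators are, given the evidence

    def summary(self) -> str:
        return (f"{self.criterion}: {self.performance.value} "
                f"({self.confidence.value} confidence)")

# A hypothetical rating for one criterion:
print(CriterionRating("cost effectiveness", Performance.AVERAGE, Confidence.LOW).summary())
# -> cost effectiveness: average (low confidence)
```

Because performance and confidence are independent dimensions, a charity can score strongly on a criterion while our confidence in that score remains low, and the rating makes that distinction visible.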
Increasing the Number of Visual Aids (e.g., Charts, Tables, and Images) in Each Review
This year, in order to make our charity evaluations accessible to a wider audience, we have made an effort to represent more information visually rather than as blocks of text. In addition to the ratings described above, we added tables summarizing charities’ main programs, key results, estimated future expenses, and our assessment of their track records. We also added a table of each charity’s human resources policies, with color-coded marks indicating the policies they have; the ones they lack; and the ones for which they have a partial policy, an informal or unwritten policy, or a policy that is not fully or consistently implemented. We think these changes will make it easier for our audience to gather the most essential findings from our reviews quickly and efficiently.
Making Changes to our Cost-Effectiveness Models
Since 2014, we have been creating quantitative cost-effectiveness models that compare a charity’s outcomes to its expenditures for each of its programs and attempt to estimate the number of animals spared per dollar spent. As these models developed each year, several recurring issues emerged:
We were only able to model short-term, direct impact. Our attempts to model medium/long-term or indirect effects of interventions were too speculative to be useful. This meant that we could not produce models at all for charities that were focused mostly on medium/long-term or indirect outcomes, and we often had to omit programs from the charities for which we did produce models.
The estimates produced by the models were too broad to be useful for making recommendation decisions. Ultimately, we want each criterion to support our recommendation decisions, and we found that we often were not confident enough in the models to give them weight in those decisions.
While we appreciate the value of using numbers to communicate our estimates and our uncertainty, we found that numerical estimates were often misinterpreted as being more certain than we intended.
The variation in cost-effectiveness between charities depended more on which interventions a charity used than on how it implemented them. This suggests that, rather than modeling the cost-effectiveness of each charity, we would be better served by modeling the average cost-effectiveness of each intervention and incorporating that into our discussion of effectiveness in Criterion 1.
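To illustrate why the estimates were too broad to be decision-relevant (the second issue above), here is a purely hypothetical sketch of this kind of “animals spared per dollar” model. Every parameter range below is invented for illustration; the point is that even moderate uncertainty in each input compounds into a very wide output interval:

```python
import random

# Hypothetical outreach program: every range below is invented for
# illustration and is not drawn from any real charity's data.
N = 100_000
samples = []
for _ in range(N):
    spend = random.uniform(40_000, 60_000)       # program expenditure (USD)
    reached = random.uniform(80_000, 120_000)    # individuals reached
    conversion = random.uniform(0.001, 0.02)     # fraction who change their diet
    years = random.uniform(0.5, 5.0)             # how long the change lasts
    animals_per_year = random.uniform(10, 100)   # animals spared per person-year
    samples.append(reached * conversion * years * animals_per_year / spend)

samples.sort()
lo, med, hi = (samples[int(N * q)] for q in (0.05, 0.5, 0.95))
print(f"Animals spared per dollar: {lo:.2f}-{hi:.2f} (median {med:.2f})")
# With these made-up inputs, the 90% interval spans more than an order of
# magnitude, which is why estimates like this were hard to weight in
# recommendation decisions.
```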
We could not fully address these issues in a single review cycle, but we have taken significant steps toward a more useful assessment of cost-effectiveness. We have moved away from a fully quantitative model and transitioned to a qualitative approach that, for each intervention type, compares the resources used and the outcomes achieved across all the charities under review. In the discussion, we have also included aspects of each charity’s specific implementation of its interventions that seem likely to have influenced its cost-effectiveness, either positively or negatively.
This approach has its own limitations: focusing on qualitative comparisons can lead us to be overly confident in our assessment. As such, we have highlighted where this approach may not work, and we have continued to put limited weight on this criterion as a whole when making decisions. That said, it has provided some insight into the cost-effectiveness of all reviewed charities, regardless of the timescale or directness of their work, allowing us to make comparisons we were previously unable to make. We have focused on comparisons within interventions rather than between them, both to avoid overlapping with Criterion 1 and to provide insight into how cost-effective a charity might be at implementing new programs in the future.
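As a sketch of how such a within-intervention comparison might be organized (our own illustration; ACE has not published an implementation, and all entries below are hypothetical), charities can be grouped by intervention type, with resources, outcomes, and implementation notes recorded side by side:

```python
from dataclasses import dataclass, field

@dataclass
class InterventionRecord:
    charity: str
    resources: str   # staff time, budget, etc., described qualitatively
    outcomes: str    # what the program achieved
    notes: list[str] = field(default_factory=list)  # implementation details that
                                                    # plausibly affect cost-effectiveness

# Hypothetical entries, grouped by intervention type so that comparisons
# stay within an intervention rather than across interventions (Criterion 1's job).
comparison = {
    "corporate outreach": [
        InterventionRecord("Charity A", "2 staff, ~$150k/year",
                           "12 cage-free commitments",
                           ["targets regional chains, lowering cost per campaign"]),
        InterventionRecord("Charity B", "5 staff, ~$400k/year",
                           "9 commitments from multinational firms",
                           ["longer campaigns, but each commitment covers more animals"]),
    ],
}

for intervention, records in comparison.items():
    print(f"== {intervention} ==")
    for r in records:
        print(f"  {r.charity}: {r.resources} -> {r.outcomes}")
```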
We welcome feedback on this approach, which can be directed to Jamie Spurgeon.
Making our Culture Survey Mandatory for Charities Receiving a Recommendation
Our evaluations of each charity’s culture have evolved every year. In 2016, we simply asked each organization’s leadership about the health of its culture. In 2017, we began reaching out to two randomly selected staff members at each charity to corroborate leadership’s claims. In 2018, we introduced culture surveys to our evaluation process, distributing our surveys to each charity’s staff with the agreement of its leadership. In some cases, a charity’s leadership preferred to send us the results of their internal surveys instead, which we also accepted in 2018.
We found that distributing our own culture survey to each charity under evaluation gave us a much fuller picture of the charity’s culture. We also found that distributing the same culture survey to every organization was essential, since charities’ internal surveys vary widely in content, relevance, and quality.
This year, we decided to make participation in our culture survey an eligibility requirement for receiving a recommendation from ACE. Our goal is not to uncover and report every small conflict or cultural problem at the charities we evaluate; rather, we report only general trends that bear on a charity’s effectiveness. We view the distribution of our culture surveys as essential due diligence, since we seek to promote charities that will contribute to the long-term health and sustainability of the animal advocacy movement.
Watch our blog for a forthcoming post with more information about our culture survey.
Hiring a Fact-Checker
ACE places a high priority on using accurate and reliable evidence in our work. To improve our capacity to investigate empirical information more deeply, we have hired a Field Research Associate whose main role is to identify and verify factual statements in our research, including claims made by the charities under evaluation. We hope that this additional staff member will improve ACE’s decision-making by allowing us to better verify the information reported to us.