Charity Feedback from 2022 Charity Evaluations


Introduction

In December 2022, ACE sent a survey to the 12 charities that participated in our 2022 charity evaluation process to gather their opinions on how it went. The primary goal was to understand shortcomings in the accuracy of our methodology so that we can better fulfill our goal of recommending animal advocacy organizations that can do the most good with additional donations. Our secondary goal was to identify ways we can enhance our evaluation process to make it more worthwhile for charities to participate, apart from the exposure and financial incentives that come from being recommended by ACE. Although we prioritize being accurate in our assessments, we also want to avoid placing unnecessary burdens on charities’ time and to make the evaluation process valuable and insightful.

We are publishing these results for anyone who is interested in how ACE’s charity evaluation process is perceived by participating charities. We also hope that it will give charities we invite to be evaluated in the future a better understanding of what to expect.

Potential Biases

All charities responded to each question in our survey, despite most questions being optional. Charities were given the option to provide anonymous feedback, but none elected to do so this year.

Five of the 12 charities we evaluated in 2022 received a recommendation. We were not surprised that these five charities generally gave more positive feedback than the other seven, given that the survey was conducted after charities received their recommendation status. We recognize that charities wanting to be re-evaluated may have felt they should be positive and constructive in their responses because the survey was not anonymous.

Results and Interpretation

We summarize the results of our survey in this section. For those who prefer to see exact details and verbatim comments from charities, we recommend reading with the detailed results spreadsheet open.

Questions were divided into two sections: one about the process of working with ACE and another about the review and methodology that went into it.

Process

Charities spent a wide range of staff time on the evaluation process, which ran from June to November: the charity that spent the least reported 15 hours, the median charity reported 100 hours, and the charity that spent the most reported 850 hours. Four charities, primarily larger ones, reported that the time investment was higher than expected. Our team is aware that we require more time to fully understand the work of larger organizations, but we also note that they have more staff to share the workload among.

Overall, 10 out of 12 charities were satisfied with the evaluation process. Generally, charities felt that they had adequate time to respond to ACE’s requests. Almost all of the feedback about ACE’s communication style (46 out of 48 ratings) was positive.

The stages of the evaluation process with the most negative ratings (two or three charities were dissatisfied or highly dissatisfied) were:

  • The culture survey, which goes out to all charity employees and sometimes volunteers

  • The follow-up questions, which we send to charities after they respond to our initial general information request questions

  • Giving feedback and approval on the review

We elaborate on charities’ issues with the culture survey in the next section. The other two stages of our evaluation process (follow-up questions and giving feedback and approval) require the most back-and-forth discussion; we will be making adjustments to make these processes easier, such as providing more opportunities for live conversations and reducing the number of channels we use to solicit feedback.

Review and methodology

The overall quality of ACE’s content (writing and graphics) was rated highly: 11 out of 12 charities rated it as good or better. [1] Additionally, we asked charities to separately rate the methodology of each of our four evaluation criteria and the accuracy of our judgments on each criterion. [2] Ratings on the two questions were correlated, but charities generally rated methodology more highly and gave more polarized ratings for accuracy.

Cost-Effectiveness received the lowest ratings on both methodology and accuracy, with the median rating for each being good. Our sense is that in trying to implement a method that accommodates different types of animal advocacy interventions, we lose granular detail. This makes it difficult to produce precise estimates, so one of our objectives for future evaluations is to categorize and express our uncertainty as clearly and transparently as possible.

Leadership and Culture and Room for More Funding (RFMF) were rated higher than Cost-Effectiveness, with both criteria receiving a median rating of very good for methodology and good for accuracy.

For Leadership and Culture, some charities reported frustration with ACE’s commitment not to investigate claims made during our culture survey. We have shifted our approach to focus on determining whether our concerns are substantial enough to affect our confidence in a charity’s effectiveness and stability, rather than attempting to assess the strength of its leadership and culture as we have in the past. We understand that this may frustrate charities that want to make things right, but we believe that our culture survey and our Leadership and Culture assessment are insightful even though we do not investigate claims.

For RFMF, one charity commented that the question about how they would use unexpected funding was unintuitive to answer, and another felt that receiving a lower score reduces the funding prospects that allow charities to scale. Currently, ACE primarily uses this criterion to decide whether a recommended charity should be considered for Top Charity status, based on whether we think it can absorb and effectively utilize the funding that a new or renewed recommendation is expected to bring in.

Feedback on the Programs criterion was mostly positive, with the median charity rating methodology as very good and accuracy as good to very good.

Finally, charities generally found participating in the evaluation process to be useful, regardless of their recommendation status: five charities found it very useful or extremely useful, four found it somewhat useful, and three found it not so useful or not useful at all. Charities mentioned the following changes as a result of their evaluation:

  • Five mentioned that they now place a greater focus on data collection, metrics tracking, or effectiveness

  • Five mentioned that their evaluation shaped their programmatic goals, such as by validating their plans to expand particular programs

  • Four mentioned that they plan to become more transparent or implement internal policy changes

We are grateful to all the charities for spending time on this survey, and we have followed up with them as needed for clarification about their comments. Overall, we consider the responses to be positive and feel validated about the changes we are working on, which will be summarized in the blog post(s) we plan to publish next month.

  1. The rating options were Poor, Fair, Good, Very Good, and Excellent.

  2. The rating options were Poor, Fair, Good, Very Good, Excellent, and No Opinion.