Project Manager at ACE by day, artist by night. Ask me about my grand painting-to-give plans 😆
erikaalonso
I echo Kieran’s points on the difference between the EA AWF and ACE Movement Grants. The only other distinguishing factor I’d mention is that, because the grant managers and processes differ, the projects that end up being funded tend to differ between the two funds. You can find a list of previous Movement Grant recipients on this page, which may give you a better idea of the types of projects funded, as well as the size of those grants in each round.
Keeping in mind that website traffic stats are not exact, here are the numbers I have from Animal Charity Evaluators for Jan 1–Dec 31, 2020:
Users: 160,508
Sessions: 214,433
Pageviews: 430,692
The team overseeing recommendation decisions is typically well aligned in making these decisions, and by the time reviews have been drafted for each charity, there are fairly clear tiers in the charities’ overall performance. That said, we do of course have disagreements. I think these are more likely to arise when a charity’s status may be changing from the previous year, e.g. an existing charity changing recommendation status or a new charity that might be recommended for the first time. When disagreements arise, we allow as much time as possible to reach agreement, first in a meeting to understand our points of disagreement, followed by email discussion if necessary. We then take a final vote to decide. If there is a split decision, the Executive Director makes the final call.
Hi Michael, I’ll respond on behalf of Leah/ACE:
The competition for roles at ACE varies depending on the position. We receive around 10 intern applications every month, so this is by far the most competitive of all positions at ACE. Given our niche area of work, replaceability is a challenge; when we’re looking for someone who has a research background, is interested in effective altruism, and is passionate about animal advocacy, we’re looking at a very narrow group of people.
Speaking more generally, people are replaceable, but the costs—in terms of both time and money—can be very high. It can take months for managers and leadership to recruit, vet, and interview new candidates, and up to six months to fully onboard a new hire. While we are rehiring for a role, the work assigned to that position either stalls or has to be redistributed to other team members who are already operating at capacity. There is also an opportunity cost for the managers and leadership participating in the process, whose time (and corresponding wages) could be better spent on programs and other activities more directly fulfilling our mission.
And while the costs of turnover are high, it’s important to note that the primary reason ACE prioritizes its people is for their inherent value. Each ACE team member is an individual with their own experiences and talents that contribute to achieving our goals. We use job descriptions as guidelines, but welcome the unique insight, ideas, and skills each person brings into their role.
Year after year, we’re able to maintain, and even improve, the quality of evaluations, research, and grant-making for which ACE is internationally known by investing in our people—people with exceptional competencies and the dedication to finding and promoting the most effective ways to help animals.
Thanks for the question, NunoSempere! Could you clarify whether you are referring to ACE specifically? Or the EAA movement as a whole?
ACE’s Room for More Funding
Thank you for pointing this out! There should only be one email address field; I’ll edit the form to fix this error :)
ACE Call for External Reviewers
Minor suggestion: I often share posts from the EA Forum on social media, but the posts do not have a default “share image” attached. If you added some metadata identifying a default image for all posts, I think the social shares would get a lot more traction.
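For illustration only (these are the standard Open Graph and Twitter Card properties; the image URL is just a placeholder, not a real asset), this is roughly the kind of markup social platforms look for when choosing a preview image:

```typescript
// Sketch: build the share-image metadata a post page could expose.
// The URL is a placeholder; the Forum would substitute its own default image.
function defaultShareImageTags(imageUrl: string): string {
  return [
    `<meta property="og:image" content="${imageUrl}" />`,         // Open Graph image (Facebook, LinkedIn, etc.)
    `<meta name="twitter:image" content="${imageUrl}" />`,        // Twitter Card image
    `<meta name="twitter:card" content="summary_large_image" />`, // request the large-preview card
  ].join("\n");
}

// Example: fall back to a site-wide default when a post has no image of its own.
console.log(defaultShareImageTags("https://example.org/forum-default-share.png"));
```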
Here is a link to the webpage with the map embed. You can also view it directly via this Data Studio dashboard. We weren’t able to parse the outcome types by geographic location, but we are looking at other software we can use. For now, at least, you can see the grant amounts by geographic location. I will keep you posted.
Thanks for clarifying, I had a feeling that is what you meant. I will let you know when the map is published :)
We are still working on the map visualizing grant recipient locations but hope to have it published online within the next few weeks. As for the outcome types, does this bar chart illustrate what you’re looking for?
Influencing public opinion: 32
Capacity-building: 43
Influencing industry: 9
Building alliances: 17
Influencing policy and law: 9
Note that some of the 49 grants fall under more than one outcome. We have more info about how we define these outcomes here.
I actively enjoy managing people, although it has taken about a decade of experience to get to this point. I’m also good at it, which I think is just as important.
ACE’s Spring 2019 Effective Animal Advocacy Fund Grants
Our cost-effectiveness estimates are for the relatively short-term, direct impact of each charity. They are estimates of the average cost-effectiveness of a charity over the last year. If the majority of a charity’s programs (by budget) are indirect and/or long-term in their outcomes, we’ve found that our cost-effectiveness estimates for that charity are too uncertain to be useful. (We would not publish a cost-effectiveness estimate for only some of their programs, so as not to risk that estimate being taken as an estimate of the cost-effectiveness of the charity’s activities as a whole.) This was the case with ProVeg; most of their programs have relatively indirect and/or long-term impact. ProVeg is something of a unique case, however, as their V-labelling program, which makes up a significant proportion of their expenditure, is mostly indirect in impact but is also revenue generating.
Speaking more generally, when making recommendation decisions to donors, we are most interested in marginal cost-effectiveness, or the cost-effectiveness of additional funding to a charity. All of our evaluation criteria are indicators of marginal cost-effectiveness. Our quantitative cost-effectiveness estimates are an important indicator of marginal cost-effectiveness, but they are neither necessary nor sufficient for estimating it. If we were to recommend only charities for which we could produce these estimates, we would be biasing ourselves in favor of more measurable short-term outcomes at the expense of promising long-term or indirect change.
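To put the distinction roughly (the notation here is just illustrative for this comment, not from our published reports): average cost-effectiveness divides a charity’s total impact by its total spending over the year, while marginal cost-effectiveness asks what additional funding on top of that would achieve.

$$
\text{average CE} \approx \frac{\text{total impact over the past year}}{\text{total spending over the past year}}
\qquad
\text{marginal CE} \approx \frac{\Delta\,\text{impact}}{\Delta\,\text{funding}}
$$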
As more research becomes available, we hope to have a better understanding of the long-term and less direct outcomes of different interventions. At that point, we will be able to produce more useful estimates for long-term and indirect change.
I share the same concerns about internal social media policies, especially when it comes to stifling discussion that staff members would otherwise have engaged in. The main reason I rarely engage in EA discussions is that I’m afraid what I write will be mistaken as representative of my employer—not just in substance, but also in tone and sophistication.
I think it’s fairly standard now for organizations to request that employees include a disclaimer when engaging in work-related conversations—something like “these are my views and not necessarily those of my employer”. That seems reasonable to include in the first comment, but becomes cumbersome in subsequent responses. And in instances where comments are curated without context, the disclaimer might not be included at all.
Also, I wonder how much the disclaimer actually helps someone distinguish the employee from the organization. For highly visible people in leadership roles, I suspect their views are often conflated with the views of the organization.
Hi everyone! I’m here to formally respond to Sarah’s article on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions and reposting appears to be discouraged. I’ve decided to post here on the EA Forum (as it tends to be the central meeting place for EAs) and will try to direct people from other places to this longer response.
Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or a lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques of our work, we take them very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the community to discuss amongst themselves prior to our posting a response. This is not only to encourage discussion amongst individual members of the community, but also so that we can prioritize responding to the concerns shared by the greatest number of community members.
It is clear to us now that we have failed to adequately communicate the uncertainty surrounding the outcomes of our leafleting intervention report. We absolutely disagree with claims of intentional deception and the characterization of our staff as acting in bad faith—we have never tried to hide our uncertainty about the existing leafleting research, and as others have pointed out, it is clearly stated throughout the site wherever leafleting is mentioned. However, our reasoning that these disclaimers would be obvious was based on the assumption that those interested in the report would read it in its entirety. After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be. We have added a longer disclaimer to the top of our leafleting report page, expressing our current thoughts and noting that we will update the report sometime in 2017.
In addition, we have decided to remove the impact calculator (a tool that let users enter donations directed to leafleting and receive low and high estimates of the number of animals spared) from our website entirely, until we feel more confident that it is not misleading to those unfamiliar with cost-effectiveness calculations and/or with how the low/best/high error bounds express the uncertainty in those numbers. It is not typical for us to remove content from the site, but we intend to operate with abundant caution. This change seems to be the best option, given that people believe we are being intentionally deceptive in keeping them online.
Leadership at ACE also agree that it has been too long since we updated our Mistakes page, so we have added new entries concerning issues we have reflected on as an organization.
We also notice that there is concern among the community that our recommendations are suspect due to the weak evidence supporting our cost-effectiveness estimates of leafleting. The focus on leafleting for this criticism is confusing to us: our cost-effectiveness estimates address many interventions, not only leafleting, and the evidence for leafleting is not much weaker than the other evidence available about animal advocacy interventions. On top of that, cost-effectiveness estimates are only a factor in one of the seven criteria used in our evaluation process, and in most cases we don’t think they have changed the outcome of our evaluation decisions. While we haven’t come up with a solution for clarifying this point, we always welcome and appreciate constructive feedback.
We are committed to honesty, and we are disappointed that the content we’ve published on the website concerning leafleting has caused so much confusion as to lead anyone to believe we are intentionally deceiving our supporters for profit. On a personal note, I’m devastated to hear that our error in communication has led to the character assassination not only of ACE, but of the people who make up the organization—some of the hardest-working, most well-intentioned people I’ve ever worked with.
Finally, I would like everyone to know that we sincerely appreciate the constructive feedback we receive from people within and beyond the EA movement.
*Edited to add links
Karolina,
Re: the 2019 EA Survey donation data, I think ACE was categorized under Meta? If you take ACE out of that list above, the data looks more accurate (~$330k for Animal Welfare).