Karolina Sarek is the Chair of the Effective Altruism Animal Welfare Fund, where she has worked as a part-time fund manager since 2019. Previously, she was the Co-founder and Co-Executive Director at Ambitious Impact (formerly Charity Entrepreneurship). She also served as a board member and advisor for various nonprofits and think tanks.
AWF is looking for full-time or part-time fund managers
Thank you for these thorough reports and the project as a whole! As Chair of the EA Animal Welfare Fund, I'm very grateful for GWWC's continued work evaluating evaluators and grantmakers in the animal welfare space, and personally grateful for their work across all cause areas. I think this sort of meta-evaluation is incredibly valuable. I'm particularly excited to see ACE's Movement Grants join the recommended list this year: their improvements are great developments for the field. Last year's evaluation of AWF was also very helpful for our team, and we're looking forward to the re-evaluation next year. It's encouraging to see the evaluation and funding ecosystem becoming increasingly robust. Thank you for your work!
Animal Welfare Fund: Payout recommendations from April to October 2024
Thank you, Angelina! I'm very excited about it, too!
Thank you for raising this question, Emre! We value transparency and recognize how outcome data helps potential donors make informed decisions. We would like to move further in that direction, but there are some limitations.
First, we could likely only present results in aggregate, potentially with individual data for successful grants/work. Publishing individual grant outcomes, particularly unsuccessful ones, could discourage grantee candor and lead others to draw overly broad conclusions about interventions or grantee capabilities. That's why we lean toward aggregate reporting: for example, sharing overall success rates or highlighting particularly impactful grants that make up the bulk of the impact.
The second limitation is our capacity. Even after I joined the fund in a greater capacity, we still only have 1.3 FTE, most of which goes toward grant sourcing, evaluation, decisions, and internal impact tracking. While we're planning to expand our team soon, we need to carefully balance any new initiatives with other strategic priorities. So even though we would be excited to increase the amount of public grant outcome reporting, we're still assessing how much of it we can implement while balancing other goals.
We hope that the steps we are taking right now, like regularly publishing payout reports and annual reports like the one above, will already help supporters assess our work until we can do more on that front.
Thank you for the update about your program!
In general, the examples we listed in the post are not exhaustive, and there are opportunities that haven't been explicitly mentioned, so if you are interested in contributing and would like to learn more about the opportunities we identified, I encourage you to reach out to me.
How much extra funding can EA AWF regrant?
Thanks! We are grateful for all the work that our grantees do.
From our experience, in general, work in Europe tends to be more tractable than in North America, especially on the margin. This is especially true for policy opportunities that show higher expected cost-effectiveness in the European context than in the US. When I look at the grants we funded in Europe over the last year, many are indeed focused on policy advocacy. What also plays a role is that we simply receive more applications from European organizations, which naturally affects our grant distribution.
Hey Vasco! I agree that AWF should be more transparent. Since I started working on it full-time, we have more capacity for that, and we are planning to communicate about our work more proactively.
In light of that, we just published a post summarizing how 2024 went, what changes we recently introduced, and what we are planning. We touched on updates to our evaluation process as well. Here is the relevant section from that post:
"Grant investigations:
Updated grant evaluation framework: We've updated our systematic review process, enabling us to evaluate every application using standardized templates that vary based on the required depth of investigation. This framework ensures a thorough assessment of key factors while maintaining flexibility for grant-specific considerations. For example, for deep evaluations (which are the vast majority of all evaluations), key evaluation areas include assessment of the project's Theory of Change, scale of counterfactual impact, likelihood of success, back-of-the-envelope cost-effectiveness and benchmarking, and the expected value of receiving funding. It also includes forecasting grant outcomes. You can read more about our process in the FAQ.
Introduced new decision procedures for marginal grants: We introduced an additional step in our evaluation that enables us to make better decisions about grants that are just below or just above our funding bar. Since AWF gives grants on a rolling basis rather than in rounds, it is important to have a process for this to ensure decisions are consistent."
We also slightly updated our website and added a new question to the FAQ; I'm copying that below:
"How Does the EA Animal Welfare Fund Make Grant Decisions?

Our grantmaking process consists of the following stages:
Stage 1: Application Processing. When we receive an application, it's entered into our project management system along with the complete application details, history of previous applications from the applicant, evaluation rubrics, investigator assignments, and other relevant documentation.
Stage 2: Initial Screening. We conduct a quick scope check to ensure applications align with our fund's mission and show potential for high impact. About 30% of applications are filtered out at this stage, typically because they fall outside our scope or don't demonstrate sufficient impact potential.
Stage 3: Selecting Primary Grant Investigator and Depth of the Evaluation. For applications that pass the initial screening, we assign investigators who are most suitable for a given evaluation. Based on various heuristics, such as the size of the grant, uncertainty, and potential risk, the Fund's Chair also determines the depth of the evaluation.
Stage 4: In-Depth Evaluation. Every grant application undergoes a systematic review. For each level of depth of investigation required, AWF has an evaluation template that fund managers follow. The framework balances ensuring that all key factors have been considered and that evaluations are consistent, while leaving space for additional, grant-specific crucial considerations. For deep evaluations (which are the vast majority of all evaluations), the primary investigator typically examines:
Theory of Change (ToC): examining how activities translate into improvements for animals and whether the evidence supports its merits
Scale of counterfactual impact: assessing the problem's scale, neglectedness, and strategic importance
Likelihood of success: evaluating track record, team competence, and concrete plans
Cost-effectiveness and benchmarking: conducting calculations to estimate impact per dollar and compare it to relevant benchmarks
Value of funding: analyzing counterfactuals and long-term sustainability
Forecasting: forecasting the probability that the project will succeed or fail, and for what reasons (validity of the ToC or performance in achieving planned outcomes)
In the case of evaluations that require the maximum level of depth, a secondary investigator critically reviews the completed write-up, raises additional questions and concerns, and provides alternative perspectives or recommendations.
Stage 5: Collective Review and Voting. After the evaluation, each application undergoes a thorough collective assessment. The Fund Chair and at least two Fund Managers review the analysis. All Fund Managers without conflicts of interest can contribute additional insights and discuss key questions through dedicated channels. Finally, each Fund Manager assigns a score, which helps us systematically compare the most promising grants.
Stage 6: Final Recommendation. Looking at the average score, the Fund Chair approves grants that are clearly above our funding bar and rejects those clearly below it. For grants near our funding threshold, we conduct another step in which all fund managers compare those marginal grants against each other to select the strongest proposals.
Once decisions are finalized, approved grants move to our grants team for contracting and reporting setup.
Throughout this process, we maintain detailed documentation and apply consistent standards to ensure we select the most promising opportunities to help animals most effectively."
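To give a rough sense of the Stage 5 and 6 mechanics described above, here is a minimal sketch of a score-averaging and triage rule. It is purely illustrative: the scoring scale, thresholds, and grant names are all hypothetical, not AWF's actual parameters.

```python
from statistics import mean

# Hypothetical thresholds on an illustrative 1-10 scale; AWF's actual
# scale and funding bar are not specified here.
APPROVE_ABOVE = 7.0  # clearly above the funding bar
REJECT_BELOW = 4.0   # clearly below the funding bar

def triage(application_scores: dict[str, list[float]]) -> dict[str, str]:
    """Average each application's fund manager scores and sort them into
    approve / reject / marginal buckets (the Stage 6 first pass)."""
    decisions = {}
    for app, scores in application_scores.items():
        avg = mean(scores)
        if avg >= APPROVE_ABOVE:
            decisions[app] = "approve"
        elif avg < REJECT_BELOW:
            decisions[app] = "reject"
        else:
            # Marginal grants get the extra step: they are compared
            # head-to-head against each other before final selection.
            decisions[app] = "marginal"
    return decisions

scores = {
    "grant_a": [8.0, 7.5, 9.0],  # clearly above the bar -> approve
    "grant_b": [5.0, 4.5, 6.0],  # near the bar -> marginal comparison
    "grant_c": [3.0, 2.5, 4.0],  # clearly below the bar -> reject
}
print(triage(scores))
# {'grant_a': 'approve', 'grant_b': 'marginal', 'grant_c': 'reject'}
```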
EA Animal Welfare Fund: 2024 Review, Changes, and Plans
Thanks Ozzie!
> "I'm not sure if this was done very intentionally, or that's more a representation of who applied, but overall, I'm more net-optimistic about investments in larger projects.
[...]
Now, especially with the recent changes at OP, it seems like some significant animal cause areas (invertebrate welfare) will likely be overlooked by other funders. I'd expect that going forward, there should be significant opportunities for other funders to be active here, and I'd expect much of the gain would come from funding larger projects."
This represents who applied at the time, how developed some of the projects are, and how uncertain their outcomes are. We would often fund "an experimental, new project" for 6 months for a pilot, then for 1 year, and if it proves itself, we would provide a larger-scale grant. Sometimes a project of this type also "graduates" to a larger funder like Open Phil, and that's why you do not see them here. EA AWF's comparative advantage is often in funding small and medium-scale projects, and I think it makes sense to serve this role in the project development pipeline.
That being said, there are some grantees with a strong track record in areas where EA AWF has a comparative advantage, and we provide them larger grants ($150k-$400k). Those typically include projects on wild animals, invertebrate-related work, and research on neglected species, although not exclusively. We plan to continue and hopefully scale our grantmaking in those areas given the Good Ventures update.

Additionally, there were also instances where we would have liked to provide a larger amount to top applicants but thought that the value of the marginal grant was higher than more funding for top applicants. If we had more funding, we would have provided both, and we have communicated in the past that EA AWF has significant room for more funding (RFMF). This is still the case.
Hey Vasco,
Yes, it's right that we don't conduct CEAs in all of our evaluations, but they are part of our analysis for some of our grant investigations. GWWC only looked at 10 grant evaluations, so it's possible they didn't come across those where we did model a BOTEC CEA. With the upcoming increase in the fund's capacity, we plan to invest more in creating BOTECs for more evaluations. We are hoping to be re-evaluated by GWWC so the evaluation reflects the changes we have made and are planning to make in the future.
In the past, we tended to do CEAs more often if: a) the project was relatively well-suited to a back-of-the-envelope calculation, and b) a back-of-the-envelope calculation seemed decision-relevant. At that time, a) and b) seemed true in a minority of cases, maybe ~10%-20% of applications depending on the round, to give some rough sense. However, note that there tends to be some difference between projects in areas or by groups we have already evaluated versus projects/groups/areas that are newer to us. I'd say newer projects/groups/areas are more likely to receive a back-of-the-envelope-style estimate.

Even in evaluations where we didn't explicitly model a CEA, we tended to look at factors that help us judge marginal cost-effectiveness, such as the scale of the problem and the potential number of animals affected, whether the work is happening in a country with high production of the target species, how neglected it is (to get at the counterfactual impact), and the goals of the grant and whether we think the applicant is likely to achieve them given their track record or the strength of their plan. We also use and reference more in-depth independent CEAs, like the ones on cage-free corporate outreach, shrimp stunning, ballot initiatives, or fish stunning, while noting that they have limitations and we do not take them at face value.
However, since then, we've started conducting BOTEC CEAs more frequently and using benchmarking in more of our grant evaluations. For example, we sometimes use this BOTEC template and compare the outcomes to cage-free corporate campaigns (modified for our purposes from a BOTEC that accompanied RP's Welfare Range Estimates).
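For readers unfamiliar with this kind of estimate, here is a minimal sketch of what a benchmarked BOTEC can look like. Every number below is invented for illustration; none comes from our actual template or from RP's estimates.

```python
# Hypothetical BOTEC: a proposed grant vs. a cage-free corporate
# campaign benchmark. All inputs are made up for illustration.

grant_cost = 100_000          # USD requested
p_success = 0.4               # forecasted probability the project succeeds
animals_affected = 2_000_000  # animals helped if it succeeds
welfare_gain = 0.05           # assumed welfare improvement per animal (arbitrary units)

expected_welfare = p_success * animals_affected * welfare_gain  # 40,000 units
cost_per_unit = grant_cost / expected_welfare                   # $2.50 per unit

# Benchmark: suppose cage-free campaigns buy one welfare unit for $3.00
# (an illustrative figure, not an actual estimate).
benchmark_cost_per_unit = 3.00

print(f"Grant: ${cost_per_unit:.2f} per welfare unit")
print(f"Benchmark: ${benchmark_cost_per_unit:.2f} per welfare unit")
# On this toy BOTEC the grant beats the benchmark, so it would look
# competitive; in practice such numbers are one input among many.
```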
For harder-to-quantify grants, like movement or capacity building, we also occasionally model expected outcomes in numerical terms and ask whether the outcome is something we would pay x amount for (the expected cost per unit).
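As a toy illustration of that "would we pay x per unit" framing (with entirely made-up numbers for a hypothetical movement-building grant):

```python
# Hypothetical example: a $50k capacity-building grant expected to
# produce new full-time advocates. All figures are invented.
grant_cost = 50_000
outcomes = [   # (probability, new full-time advocates produced)
    (0.5, 0),  # project fizzles
    (0.3, 2),  # partial success
    (0.2, 5),  # strong success
]

expected_advocates = sum(p * n for p, n in outcomes)  # 0 + 0.6 + 1.0 = 1.6
cost_per_advocate = grant_cost / expected_advocates   # $31,250

# Decision question: would we pay ~$31k per counterfactual full-time
# advocate? If our (hypothetical) bar were $40k per advocate, we would fund it.
print(f"Expected cost per advocate: ${cost_per_advocate:,.0f}")
```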
We also have a score calibration guide that we use when scoring grants, to make scores comparable across grants.
We do not put that much weight on applicants' CEAs, as they are impossible to compare with CEAs that use different methodologies, and they are very sensitive to assumptions that we often cannot verify.
I hope that helps you understand our methodology. Let me know if you have any questions.
Animal Welfare Fund: Payout recommendations from May 2022 to March 2024
Thanks! Can you tell me more about why you think improving dissolved oxygen is not a good idea? I still consider poor dissolved oxygen to be a major welfare problem for fish in the setting where the charity is expected to operate, and improving it through various means (assuming we also keep stocking density constant or decrease it) would be good for their welfare. This has been validated in the field by FWI in this assessment and studied by others, so I'm a bit surprised, unless you are referring to specific interventions to improve dissolved oxygen, about whose cost-effectiveness I have high uncertainty.
And about the report you link, I broadly agree and have written about it below.
[previous comment deleted because I accidentally sent an unfinished one]
Thanks for the example! That makes sense and makes me wonder if part of the disagreement came from thinking about different reference classes. I agree that, in general, the research we did in our first year of operations (2018/2019) is well below the quality standard we expect of ourselves now, or even what we expected of ourselves in 2020. I agree it is easy to find a lot of errors (that weren't decision-relevant) in our research from that year. That is part of the reason those reports are no longer on the website.

That being said, I still broadly support our decision not to spend more time on research that year, because spending more time on it would have come with significant tradeoffs. At the time, there was no other organization whose research we could have relied on, and the alternative to the assessment you mention was either to not compare interventions across species (or to reduce the comparison to a simplistic metric like "the number of animals affected"), or to spend more time on research and run the Incubation Program a year later, in which case we would have lost a year of impact and might not have started the charities we did. That would have been a big loss: for example, that year we incubated Suvita, whose impact and promise were recently recognized by GiveWell, which provided Suvita with $3.3M to scale up; and we incubated Fish Welfare Initiative (FWI) and Animal Advocacy Careers, a decision I still consider to be a good one (FWI is an ACE Recommended Charity, and even though I agree with its co-founders that their impact could be higher, I'm glad they exist). We also couldn't simply hire more staff and do things more in-depth, because it was our first year of operation and there was not enough funding and other resources available for what was, at the time, an unproven project.
I wouldn't want to spend more time on that, especially because one of the main principles of our research is "decision-relevance," and the "wild bug" one-pager you mention, and similar ones, were not decision-relevant. If they had been, we would not have settled for something of that quality, and we would have put more time into them.
For what it is worth, I think there are things we could have done better. Specifically, we could have put more effort into communicating how little weight others should put on some of that research. We did that by stating at the top (for example, in the wild bug one-pager you link) that "these reports were 1-5 hours time-limited, depending on the animal, and thus are not fully comprehensive," and at the time, we thought that was sufficient. But we could have stressed the epistemic status even more strongly and in more places, so it would be clear to others that we put very little weight on it. For full transparency, we also made another mistake: we didn't recommend working on banning/reducing bait fish as an idea at the time because, from our shallow research, it looked less promising; later, upon researching it more in-depth, we decided to recommend it. It wouldn't have made a difference then, because there were not enough potential co-founders in year 1 to start more charities, but it was a mistake nevertheless.
Thanks for clarifying! We always have an expert-view section in the report and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it and fact-check it, and if it is right, make sure we avoid this mistake in the future.
> "2. CE's charities working on animal welfare have mostly not been very good, and listening to external feedback prior to launching them would have told them this would happen."
> "[...] doesn't do CE's original proposed idea anymore"

On the point of the charities not doing CE's originally proposed idea anymore, I want to clarify that we don't see charities tweaking an idea as a failure but rather as the expected course of action that we encourage. We are aware of the limitations of desktop research (however in-depth), and we encourage organizations to quickly update based on country visits, interactions with stakeholders, and the pilot programs they run. There is just some information that a researcher wouldn't be able to get, and they need input from someone working on the ground. For example, when Rethink Priorities was writing their report on shrimp welfare, they consulted SWP extensively to gain that perspective. Because CE charities operate in extremely neglected cause areas, there is often no other "implementer" our research team can rely on. Therefore, organizations are usually expected to change the idea as they learn in their first months of operations. I see this as a success in ingraining the values of changing one's mind in the face of new evidence, seeking this evidence, and making good decisions on the side of co-founders with the support of their CE mentors, and we are happy when we see it happen.
There is a complex trade-off to be made when balancing the learning value of more in-depth desktop research vs. more time spent learning as one implements, and I don't think CE always gets it right, but the latter perspective is often misunderstood and underappreciated in the EA space.
Regarding charities specifically, in general we expect about a 2/5 "hit rate" (rarely because the broad idea is bad; more often because the implementation is challenging for one reason or another), and many people, including external charity evaluators and funders, have a different assessment of some of the charities you list. That being said, if you have any specific feedback about the incubated organizations' strategies or ideas, please reach out to them. As you mentioned, they are open to hearing input and feedback. Similarly, if you have specific suggestions about how CE can improve its recommendations, please get in touch with our Director of Research at sam@charityentrepreneurship.com; we appreciate specific feedback and conversation about how we can improve. Thank you for your support of multiple CE charities so far!
Thank you, Benjamin, for writing this in-depth profile and to the whole 80,000 hours team for your work!
Since grantmaking is one of the highlighted careers, I'm going to allow myself to shamelessly plug two opportunities at the EA Animal Welfare Fund that we posted today: full-time and part-time Fund Manager roles (the deadline is 29 December) and our expression of interest form for the Fund Development Officer/Manager/Director position.