GiveWell Updates
In our 2025 grantmaking year, GiveWell approved $418 million across 131 grants to 69 organizations—the most grants we’ve ever made in a single year.
Through years of deliberate groundwork, we’ve been growing our research capacity and scope so that we can direct substantially more funding to the most impactful opportunities we can find. Last year’s grantmaking reflects this growth, and we will continue an intensive effort this year to scale our ability to partner with donors to help people in need.
GiveWell recently launched two new Requests for Information (RFIs) to expand our funding of vaccination programs and of iron fortification and supplementation programs. Submissions for both RFIs are due March 27.
Vaccination outreach RFI: Seeking targeted outreach or mobile vaccination programs that aim to increase uptake of routine vaccinations for children under 2 in the DRC, Nigeria, or Somalia.
Anemia control RFI: Seeking programs that reduce iron deficiency anemia through large-scale iron fortification, supplementation, or biofortification in Africa.
GiveWell aims to find and fund programs that will do the most good per dollar, and we regularly evaluate results to decide whether to continue our support. Sometimes, even if a program we fund is doing a lot of good, it may not have the impact per dollar we expected.
In our latest podcast episode, GiveWell CEO and co-founder Elie Hassenfeld and Senior Program Officer Erin Crossett discuss how we responded when early data on Evidence Action’s Dispensers for Safe Water in Malawi and Uganda indicated the program wasn’t reaching as many people as estimated.
GiveWell is hiring a Salesforce Administrator to deliver exceptional technical support to our internal users and drive continuous improvements to the systems that power our life-saving work.
Hi Tony—This is Mark Walsh, a GiveWell researcher on the team responsible for pressure-testing GiveWell’s research processes and conclusions. Thank you for this thoughtful write-up!
At a high level, I agree with the central thesis: we’ve underinvested in monitoring and evaluation relative to other components of our analysis. As you mentioned in your post, we’ve been working to fix that, and I wanted to share a bit more about what we’ve done so far, what we’re planning, and some related gaps in our work.
We think this kind of external engagement with our research is valuable and makes our work better. We’d welcome feedback on the steps we’ve taken so far and where we should consider doing more.
What we’ve done
Over the past year, we’ve been working on what we call “M&E red teaming”—a systematic review of the monitoring and evaluation practices of our largest grantees, motivated by the same concerns Tony raises. We’re planning to publish our findings soon but wanted to give a brief overview.
From June to December 2025, we dedicated teams of 3–4 research staff to work full-time for roughly 3 weeks each on six program areas: our four top charities plus water chlorination and malnutrition treatment. For each program, we evaluated many of the dimensions Tony highlights in his monitoring checklist: the independence of data collectors from program staff, the neutrality of the sampling frame, the objectivity and precision of the measurement approach, data quality checks and backchecks, the timeliness of data, triangulation with independent sources, and whether the program takes timely action to address issues raised by monitoring.
Since completing the red teaming, we’ve been doing the following to make improvements:
Setting coverage survey standards. We’ve drafted coverage survey standards that cover survey firm independence, surveyor workloads, sampling protocols, backchecks, objective verification of key outcomes, and data-sharing timelines. We’re starting to solicit feedback on these from grantees to learn more about what’s feasible.
Making several independent monitoring grants. Since August 2025, we’ve made 14 grants for independent monitoring and evaluation (see list of grants). For example:
We funded four research organizations (Asante Research Institute, Innovations for Poverty Action, ORB, and Université de Kinshasa) to conduct vaccination coverage surveys in South Sudan, Nigeria, Ethiopia, and the Democratic Republic of the Congo.
We funded Marakuja to conduct a survey of net coverage, usage, and malaria burden in the Democratic Republic of the Congo.
We funded Innovations for Poverty Action to conduct a survey of net coverage and usage, seasonal malaria chemoprevention coverage and adherence, vitamin A supplementation coverage, and vaccination coverage in Nigeria.
We funded IDinsight to conduct coverage surveys, health facility data quality checks, and stakeholder interviews to evaluate malnutrition programs in Northern Nigeria.
We required each of the 12 implementing partners funded through our chlorination Request for Information to contract with external survey firms, and we funded the Aquaya Institute to provide guidance on best practices (e.g., chlorination measurement, E. coli measurement, and sampling frames) and regular quality control checks during the data collection period. In November 2024, we also funded Aquaya to directly monitor and evaluate Uduma’s in-line chlorination pilot.
Ensuring grantees are adequately monitoring critical activities. We’ve made it standard practice to identify the critical activities required for the desired impact and to ensure grantees have plans for monitoring these in real time and addressing issues quickly.
Better data on population and program costs. The red teaming highlighted these as two other areas central to implementation in which we think we’ve underinvested. To address this, we have funded independent population estimates in key countries, are developing budget and financial reporting templates for some program areas, and are working on internal guidance and tools for grantmakers (e.g., a dashboard with alternative population estimates and rules of thumb for when and where to trust which estimates).
As Tony suggests, we think the right way to evaluate these investments is based on the value of the information we’ll gain and the impact we expect it to have on our future grantmaking decisions. That means we are more likely to fund expensive M&E in areas where GiveWell directs a lot of funding (or areas with a lot of room for more funding) and where we have more uncertainty, since that’s where we expect M&E to affect our grantmaking the most. For example, we’ve funded expensive independent coverage surveys of insecticide-treated net and vaccination programs because we direct a lot of funding to these programs. On the other hand, we try to take a rigorous but lighter-touch approach in cases where there is less funding at stake, or where key uncertainties (e.g., whether an organization can establish partnerships or hire effectively) can be resolved more cheaply before we invest in expensive M&E.
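One way to make this reasoning concrete is a back-of-the-envelope value-of-information calculation. The sketch below is purely illustrative: the function name, the dollar figures, and the probabilities are all made up for this example and are not GiveWell’s actual numbers or methodology.

```python
# Toy value-of-information sketch for deciding whether to fund M&E.
# All figures and parameter values are hypothetical illustrations.

def voi_of_survey(funding_at_stake, p_decision_change, value_gain_per_dollar, survey_cost):
    """Expected net value of commissioning a survey.

    funding_at_stake: dollars of future grantmaking the result could redirect
    p_decision_change: probability the survey changes the funding decision
    value_gain_per_dollar: expected improvement in value per redirected dollar
    survey_cost: cost of the survey itself
    """
    expected_benefit = funding_at_stake * p_decision_change * value_gain_per_dollar
    return expected_benefit - survey_cost

# Large, uncertain program area: an expensive survey can still be worth it
# (expected benefit exceeds the survey cost, so net value is positive).
large_area = voi_of_survey(50_000_000, 0.20, 0.10, 500_000)

# Smaller program area: the same survey cost is no longer justified
# (expected benefit is small relative to the cost, so net value is negative).
small_area = voi_of_survey(2_000_000, 0.20, 0.10, 500_000)
```

Under this framing, the same survey that is clearly worth funding for a large, uncertain program area fails the test for a small one, which matches the lighter-touch approach described above.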
What’s next
Over the next several months, we’re planning to finalize and roll out our coverage survey standards with grantees; analyze the results from the independent surveys, enhanced M&E, and population/costs work we’ve commissioned; and dig further into more of these areas (e.g., investigating better ways to monitor the impact of our grantees on disease morbidity and mortality).
I expect we’ll learn a lot as we go about what’s feasible and what information is actually most valuable. I’m sure our approach will change as we learn more about both the cost and potential impact of this information. That being said, I think we are on the right track.
A broader gap
I also want to flag that we think the issues Tony raises are part of a broader gap. It’s not just that we need better quantitative monitoring—we also need to invest more in understanding what’s actually happening on the ground with the programs we fund.
We’ve been trying to gather more “local insights” on our work. This involves site visits, qualitative research, conversations with local experts, and other ways of testing our desk-based assumptions against what’s happening on the ground. One example is funding the Busara Center for Behavioral Economics to observe vitamin A supplementation delivery in Nigeria and interview households, front-line staff, and government officials about the program.
We’re still figuring out which approaches are most useful, but we think shifting more of our research effort “beyond the spreadsheet,” in the ways Tony describes, is directionally right and something we’re making progress on. As I said at the start, we welcome feedback on our work so far—and on our future progress as it occurs.