Hey Vasco, these are my personal thoughts and not FP’s (I have now left FP, and anything FP says should take precedence). I have pretty limited capacity to respond, but a few quick notes—
First, I think it’s totally true that there are some BOTEC (back-of-the-envelope calculation) errors, many or most of them mine. (Thank you, GWWC, for spotting them: this kind of scrutiny is crucial to a well-functioning ecosystem and, more selfishly, to improving my skills as a grantmaker. I really value this!)
At the same time, these are very rough BOTECs that were never meant to be rigorous CEAs (cost-effectiveness analyses): they were tools to enable quick decision-making under limited capacity (I do not take the exact numbers seriously; I expect they’re wrong in both directions), and many factors beyond the BOTEC went into grantmaking decisions.
I don’t want to make judgments about whether the fund (while I was there) was surpassing GiveWell; I’m super happy to leave this to others. I was focused on funders who would not counterfactually give to GiveWell, meaning that this was less decision-relevant for me.
I think it’s helpful to look at the grant history of the FP GHDF. Here are all the grants that I think have been made by the FP GHDF since Jan 2023 (apologies if I’ve missed any):
* 4 were grants to established orgs (New Incentives, Sightsavers, Pure Earth, Evidence Action)
* 3 went to r.i.c.e, FEM, and Suvita (roughly, currently-scaling orgs that were considerably smaller/more early-stage when we originally granted)
* 4 were ecosystem multiplier-y grants (Giving Multiplier, TLYCS, Effective Altruism Australia, Effektiv Spenden)
* 2 were advocacy grants: one to 1DaySooner (malaria vaccine roll-out) and one to LEEP
* 5 were grants to recent young orgs that we think are promising, though of course supporting young orgs is hit-and-miss (Ansh, Taimaka, Essential, IMPALA, HealthLearn)
* 1 was a deworming research grant
* 1 was an anti-corruption journalism grant (OCCRP), which we think is promising due to its economic growth impacts
I think it’s plausible that I spent too little time on these grant evals, and that this probably contributed to the BOTEC errors. But I feel pretty good about the actual decision-making, although I am very biased:
* At least 2 or 3 of these struck me as really time-sensitive (all grants are time-sensitive in a way, but I’m talking about ‘might have to shut down some or all operations’ or ‘there is a time-sensitive scaling opportunity’).
* I think there is a bit of a gap for early-ish funding, and there are benefits to opening up more room for funding by scaling these orgs (i.e. funding orgs beyond seed funding, but before they can absorb, or have the track record for, multi-million-dollar grants). It’s still early days, but I feel pretty good about the trajectories of the young orgs that the FP GHDF supported.
* Having a few high-EV, ‘big if true’ grants (advocacy, economic growth, R&D) feels reasonable to me.
I hope this context is useful, and note that I can’t speak to FP’s current/future plans for the FP GHDF. I value the engagement; thanks.
Hey Aidan,
I want to acknowledge my potential biases for any new readers of this thread (I used to be the senior researcher running the fund at FP, most or all of the errors highlighted in the report are mine, and I now work at GiveWell). These are personal views.
I think getting further scrutiny of, and engagement with, key grantmaking cruxes is really valuable. I also think the discussion this has prompted is cool. A few points from my perspective:
1. As Matt’s comment points out, there is a historical track record for many of these grants. Some have gone on to be GiveWell-supported, or (imo) have otherwise demonstrated success in a way that suggests they were a ‘hit’. In fact, with the caveat that there are a good number of recent grants where it’s too early to tell, there hasn’t yet been one that I consider a ‘miss’. Is it correct to update primarily from 3 spot checks of early-stage BOTECs (my read of this report), rather than from what actually happened after the grants were made? Does this risk Goodharting?
2. Is this really comparing like for like? In my view, small grants shouldn’t require as strong an evidence base as a multi-million-dollar grant, mainly for the time-expenditure reasons that Matt points out. I am concerned that this report pushes us further toward a point where (due to the level of rigour, and therefore time expenditure, required) grantmaking orgs are incentivised to only make really large grants. I think this systematically disadvantages smaller orgs, and that this is a negative thing (I guess your view here partially depends on your view on point 3 below).
3. In my view, a crucial crux here is the value of supporting early-stage orgs, alongside other potentially riskier items such as advocacy and giving multipliers. I am genuinely uncertain, and I think smart and reasonable people can disagree here. But I agree with Matt’s point that there’s significant upside through potentially generating large future room for funding at high cost-effectiveness. This kind of long-term optionality benefit isn’t typically included in an early-stage BOTEC (because doing a full value-of-information (VOI) analysis is time-consuming), and I think it’s somewhat underweighted in this report.
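To make the optionality point in (3) concrete, here’s a minimal toy sketch of how a simple optionality term can change an early-stage grant’s bottom line. Every number is hypothetical and purely illustrative, not drawn from any actual FP BOTEC:

```python
# Toy sketch: how an optionality term can change an early-stage BOTEC.
# Every number here is hypothetical and purely illustrative.

grant_size = 100_000      # USD granted to a young org now
direct_value = 80_000     # estimated near-term impact, in USD-equivalents

# Naive BOTEC: count only the direct impact of this grant.
naive_ratio = direct_value / grant_size   # 0.80x: looks below the bar

# Optionality term: some chance the org scales successfully and opens up
# a large, highly cost-effective room for more funding later.
p_scale = 0.2             # hypothetical chance the org scales successfully
future_value = 3_000_000  # hypothetical impact of funding it could then absorb
credit_share = 0.3        # hypothetical counterfactual credit to this grant
option_value = p_scale * future_value * credit_share  # 180,000

adjusted_ratio = (direct_value + option_value) / grant_size  # 2.60x

print(f"direct only: {naive_ratio:.2f}x; with optionality: {adjusted_ratio:.2f}x")
```

Whether a grant like this clears the bar then hinges on exactly those speculative parameters, which is part of why I think reasonable people can disagree here.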
I no longer have access to the BOTECs to check (since I’m no longer at FP), and, again, I think the focus on the BOTECs is a bit misplaced. I do want to briefly acknowledge, though, that I’m not sure all of these are actually errors (but I still think it’s likely that there are some BOTEC errors, and that this would be true for many or most orgs making small grants).