Thanks, Aidan and Sjir! I really like this project.
While our decision is not to recommend FP GHDF at this time, we would like to emphasise that we did not conclude that the marginal cost-effectiveness of the GHDF is unambiguously not competitive with GiveWell’s recommended charities and funds — in fact, we think the GHDF might be competitive with GiveWell now or in the near future.
I do not understand why you “think the GHDF might be competitive with GiveWell now”. From your report, it seems quite clear to me that donating to GiveWell’s funds is better now. You “looked into [3] grants that were relatively large (by FP GHDF standards), which we expect to be higher effort/quality”, and found major errors in 2 grants, and a major methodological oversight in the other.
Grant 1
When the grant was made, the opportunity was determined to be 42x GiveDirectly. However, on a re-evaluation for a later grant to the same organisation (where the original BOTEC was re-purposed), an extra zero was discovered in one of the inputs that had been used — which, when remedied, reduced the overall estimate to 12x GiveDirectly.
[...]
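To make the arithmetic concrete, here is a toy two-component BOTEC with made-up inputs chosen only to reproduce the reported 42x and 12x endpoints; it is not FP’s actual model, just an illustration of how a single extra zero in one input can dominate the headline multiple:

```python
# Hypothetical two-component BOTEC, illustrating how one "extra zero"
# in a single input can inflate the headline multiple. All numbers are
# invented for illustration; they are not FP's actual inputs.

def botec_multiple(component_a: float, component_b: float) -> float:
    """Cost-effectiveness as a multiple of GiveDirectly (toy model)."""
    return component_a + component_b

CORRECT_A = 3.33   # hypothetical contribution of the mistyped input
OTHER_B = 8.67     # hypothetical contribution of everything else

with_typo = botec_multiple(CORRECT_A * 10, OTHER_B)  # extra zero: 10x too big
corrected = botec_multiple(CORRECT_A, OTHER_B)

print(f"with extra zero: {with_typo:.0f}x GiveDirectly")  # ~42x
print(f"corrected:       {corrected:.0f}x GiveDirectly")  # ~12x
```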
Grant 2
In the second grant we looked into, the effect size used to estimate the impact of the grant applied to children above a certain age, but annualised total childhood mortality in the region was used as the assumed base mortality rate to which the effect size was applied. Because most childhood mortality occurs in the first few months of life, we believe this is a mistake that overestimated the impact of the intervention by about 50%. When we applied adjustments to the model to account for this, the overall cost-effectiveness of the BOTEC fell below 10x GiveDirectly.
[...]
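A minimal sketch of the base-rate mismatch, assuming (purely for illustration) that a third of childhood deaths occur before the age the effect size applies to, which is the kind of split that yields the roughly 50% overestimate described:

```python
# Toy illustration of the base-rate mismatch described above. Numbers are
# hypothetical, chosen so the mismatch produces roughly a 50% overestimate.

TOTAL_CHILD_MORTALITY = 60.0     # deaths per 1,000, all childhood ages (made up)
SHARE_BEFORE_ELIGIBLE_AGE = 1/3  # share of deaths before the age the effect
                                 # size actually applies to (made up)
EFFECT_SIZE = 0.20               # relative mortality reduction among eligible ages

eligible_mortality = TOTAL_CHILD_MORTALITY * (1 - SHARE_BEFORE_ELIGIBLE_AGE)

deaths_averted_wrong = EFFECT_SIZE * TOTAL_CHILD_MORTALITY  # base rate too broad
deaths_averted_right = EFFECT_SIZE * eligible_mortality     # matched base rate

overestimate = deaths_averted_wrong / deaths_averted_right - 1
print(f"overestimate: {overestimate:.0%}")  # 50%
```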
Grant 3
In the third grant we reviewed, our major concern was that FP modelled the probability that a novel technology would be significantly scaled up at 35%, with what we considered to be insufficient justification. This was particularly concerning because small reductions to this estimate resulted in the BOTEC going below 10x GiveDirectly.
Notably, in a subsequent evaluation conducted about 6 months after the first, FP adjusted this probability down to 25%. While we think it is positive that FP re-evaluated this:
1) We still think this probability could be too high (for similar reasons to those noted in Grant 1).
2) Plugging this number into the original BOTEC sends the estimated cost-effectiveness below 10x GiveDirectly, which reinforces the point that the original BOTEC should not have been relied on in the original grant without further justification (see the sketch after this list).
3) In the more recent evaluation — which repurposed the original grant evaluation and BOTEC, but where the same probability was modelled at 25% rather than 35% — there was no reference to why the number was updated or additional information about why the estimate was chosen. This increases our concerns around FP’s transparency in justifying the inputs used in their BOTECs.
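A quick sensitivity sweep illustrates points 2 and 3. The 12x baseline at a 35% probability is a hypothetical anchor, and the linear scaling is an assumption; only the 35% and 25% probabilities and the 10x GiveDirectly threshold come from the discussion above:

```python
# One-parameter sensitivity check of the kind implied above. The 12x
# baseline at p = 0.35 is a made-up anchor; only the 35% -> 25%
# probabilities and the 10x threshold come from the text.

BASELINE_MULTIPLE = 12.0  # hypothetical cost-effectiveness at p = 0.35
BASELINE_P = 0.35

def multiple_at(p_scale_up: float) -> float:
    """Assume cost-effectiveness scales linearly with P(scale-up)."""
    return BASELINE_MULTIPLE * p_scale_up / BASELINE_P

for p in (0.35, 0.30, 0.25, 0.20):
    flag = "" if multiple_at(p) >= 10 else "  <- below 10x GiveDirectly"
    print(f"P(scale-up) = {p:.2f}: {multiple_at(p):.1f}x{flag}")
```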
Worth pointing out that the FP staff who could reply to this are on Thanksgiving break, so a response will probably take until next week.

Hey Vasco, these are my personal thoughts and not FP’s (I have now left FP, and anything FP says should take precedence). I have pretty limited capacity to respond, but a few quick notes:
First, I think it’s totally true that there are some BOTEC errors, many or most of them mine (thank you, GWWC, for spotting them; it’s so crucial to a well-functioning ecosystem and, more selfishly, to improving my skills as a grantmaker. I really value this!).
At the same time, these are hugely rough BOTECs that were never meant to be rigorous CEAs: they were tools to enable quick decisions under limited capacity (I do not take the actual numbers seriously; I expect they’re wrong in both directions), and many factors beyond the BOTEC went into grantmaking decisions.
I don’t want to make judgments about whether the fund (while I was there) was surpassing GiveWell; I’m super happy to leave this to GWWC. I was focused on funders who would not counterfactually give to GiveWell, meaning this was less decision-relevant for me.
I think it’s helpful to look at the grant history of the FP GHDF. Here are all the grants that I think have been made by the FP GHDF since Jan 2023; apologies if I’ve missed any:
* New Incentives, Sightsavers, Pure Earth, Evidence Action
* r.i.c.e, FEM, Suvita (roughly: currently scaling orgs that were considerably smaller/more early-stage when we originally granted)
* 4 were ecosystem multiplier-y grants (Giving Multiplier, TLYCS, Effective Altruism Australia, Effektiv Spenden)
* 1 was an advocacy grant to 1DaySooner and 1 an advocacy grant to LEEP
* 5 were grants to recent young orgs that we think are promising, though of course supporting young orgs is hit and miss (Ansh, Taimaka, Essential, IMPALA, HealthLearn)
* 1 was a deworming research grant
* 1 was an anti-corruption journalism grant that we think is promising due to its economic growth impacts (OCCRP)
I think it’s plausible that I spent too little time on these grant evals, and this probably contributed to BOTEC errors. But I feel pretty good about the actual decision-making, although I am very biased:
* At least 2 or 3 of these struck me as being really time-sensitive (all grants are time-sensitive in a way, but I’m talking about ‘might have to shut down some or all operations’ or ‘there is a time-sensitive scaling opportunity’).
* I think there is a bit of a gap for early-ish funding, and benefits to opening up more room for funding by scaling these orgs (i.e. funding orgs beyond seed funding, but before they can absorb, or have the track record for, multi-million grants). It’s still early days, but I feel pretty good about the trajectories of the young orgs that the FP GHDF supported.
* Having a few high-EV, ‘big if true’ grants feels reasonable to me (advocacy, economic growth, R&D).
I hope this context is useful, and note that I can’t speak to FP’s current or future plans for the FP GHDF. I value the engagement; thanks!