Thanks, Aidan and Sjir! I really like this project.
While our decision is not to recommend FP GHDF at this time, we would like to emphasise that we did not conclude that the marginal cost-effectiveness of the GHDF is unambiguously not competitive with GiveWell's recommended charities and funds; in fact, we think the GHDF might be competitive with GiveWell now or in the near future.
I do not understand why you "think the GHDF might be competitive with GiveWell now". From your report, it seems quite clear to me that donating to GiveWell's funds is better now. You "looked into [3] grants that were relatively large (by FP GHDF standards), which we expect to be higher effort/quality", and found major errors in 2 grants, and a major methodological oversight in the other.
Grant 1
When the grant was made, the opportunity was determined to be 42x GiveDirectly. However, on a re-evaluation for a later grant to the same organisation (where the original BOTEC was re-purposed), an extra zero was discovered in one of the inputs that had been used; when remedied, this reduced the overall estimate to 12x GiveDirectly.
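To illustrate why a single misplaced zero is so damaging (with hypothetical inputs, not FP's actual model): in a BOTEC whose factors multiply, one input entered 10x too large inflates the headline multiple by exactly 10x.

```python
from math import prod

def botec_multiple(factors):
    """Cost-effectiveness as a multiple of GiveDirectly: a product of factors."""
    return prod(factors)

correct_inputs = [2.0, 0.7, 3.0]  # hypothetical inputs, for illustration only
typo_inputs = [2.0, 7.0, 3.0]     # the same inputs, but 0.7 entered as 7.0

print(botec_multiple(typo_inputs))  # 42.0
ratio = botec_multiple(typo_inputs) / botec_multiple(correct_inputs)
print(round(ratio, 6))              # 10.0
```

That FP's corrected figure was 12x rather than 42/10 = 4.2x suggests the misstated input did not enter purely multiplicatively; the sketch only shows how sensitive multiplicative estimates are to data-entry slips.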
[...]
Grant 2
In the second grant we looked into, the effect size used to estimate the impact of the grant applied to children above a certain age, but annualised total childhood mortality in the region was used as the assumed base mortality rate to which the effect size was applied. Because most childhood mortality occurs in the first few months of life, we believe this is a mistake that overestimated the impact of the intervention by about 50%. When we applied adjustments to the model to account for this, the overall cost-effectiveness of the BOTEC fell below 10x GiveDirectly.
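A toy version of this mistake (all numbers made up and chosen only to reproduce the ~50% figure in the text, not FP's actual inputs): if a third of childhood deaths occur before children reach the age the effect size applies to, then using total annualised childhood mortality as the base rate overstates deaths averted by 50%.

```python
# Hypothetical numbers, chosen to reproduce the ~50% overstatement described above.
total_mortality = 0.030          # annualised deaths per child-year, all ages (made up)
share_before_eligible_age = 1/3  # deaths occurring before the effect size applies (made up)
effect_size = 0.20               # relative mortality reduction from the study (made up)

eligible_mortality = total_mortality * (1 - share_before_eligible_age)

averted_using_total = effect_size * total_mortality        # the mistaken base rate
averted_using_eligible = effect_size * eligible_mortality  # the correct base rate

overstatement = averted_using_total / averted_using_eligible - 1
print(f"impact overstated by {overstatement:.0%}")  # impact overstated by 50%
```

The larger the share of deaths outside the eligible age range, the larger the overstatement, which is why the concentration of mortality in the first months of life matters so much here.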
[...]
Grant 3
In the third grant we reviewed, our major concern was that FP modelled the probability that a novel technology would be significantly scaled up at 35%, with what we considered to be insufficient justification. This was particularly concerning because small reductions to this estimate resulted in the BOTEC going below 10x GiveDirectly.
Notably, in a subsequent evaluation conducted about 6 months after the first, FP adjusted this probability down to 25%. While we think it is positive that FP re-evaluated this:
1) We still think this probability could be too high (for similar reasons to those noted in Grant 1).
2) Plugging this number into the original BOTEC sends the estimated cost-effectiveness below 10x GiveDirectly, which reinforces the point that the original BOTEC should not have been relied on without further justification when the original grant was made.
3) In the more recent evaluation, which repurposed the original grant evaluation and BOTEC but modelled the same probability at 25% rather than 35%, there was no reference to why the number was updated or additional information about why the estimate was chosen. This increases our concerns around FP's transparency in justifying the inputs used in their BOTECs.
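The sensitivity point in (2) is easy to see with a sketch. Here the 12x starting multiple is a hypothetical placeholder, not FP's figure; the only assumption is that the grant's modelled value is roughly proportional to the probability of scale-up.

```python
# Hypothetical sensitivity check; the 12.0x multiple at p = 0.35 is made up.
def ce_multiple(p_scaleup, multiple_at_p35=12.0):
    """CE multiple of GiveDirectly, assumed linear in the scale-up probability."""
    return multiple_at_p35 * p_scaleup / 0.35

for p in (0.35, 0.30, 0.25):
    print(f"p = {p:.2f} -> {ce_multiple(p):.1f}x GiveDirectly")
# Dropping p from 0.35 to 0.25 cuts the multiple by ~29%,
# so any starting point below ~14x ends up under the 10x bar.
```

Under this assumption, a 12x grant lands at roughly 8.6x once the probability is revised to 25%, which is the dynamic described in the comment.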
Hey Vasco, these are my personal thoughts and not FP's (I have now left FP, and anything FP says should take precedence). I have pretty limited capacity to respond, but a few quick notes:
First, I think it's totally true that there are some BOTEC errors, many/most of them mine (thank you GWWC for spotting them; it's so crucial to a well-functioning ecosystem and, more selfishly, to improving my skills as a grantmaker. I really value this!)
At the same time, these are hugely rough BOTECs that were never meant to be rigorous CEAs: they were used as decision-making tools to enable quick decisions under limited capacity (I do not take the exact numbers seriously; I expect they're wrong in both directions), with many factors beyond the BOTEC going into grantmaking decisions.
I don't want to make judgments about whether the fund (while I was there) was surpassing GiveWell; I'm super happy to leave this to others. I was focused on funders who would not counterfactually give to GW, meaning that this was less decision-relevant for me.
I think it's helpful to look at the grant history of the FP GHDF. Here are all the grants that I think have been made by the FP GHDF since Jan 2023 (apologies if I've missed any):
* New Incentives, Sightsavers, Pure Earth, Evidence Action
* r.i.c.e, FEM, Suvita (roughly: currently scaling orgs that were considerably smaller/more early-stage when we originally granted)
* 4 are ecosystem multiplier-y grants (Giving Multiplier, TLYCS, Effective Altruism Australia, Effektiv Spenden)
* 1 was an advocacy grant to 1DaySooner (malaria vaccine roll-out), and 1 was an advocacy grant to LEEP
* 5 are recent young orgs that we think are promising, though of course supporting young orgs is hit and miss (Ansh, Taimaka, Essential, IMPALA, HealthLearn)
* 1 was a deworming research grant
* 1 was an anti-corruption journalism grant which we think is promising due to economic growth impacts (OCCRP)
I think it's plausible that I spent too little time on these grant evals, and this probably contributed to the BOTEC errors. But I feel pretty good about the actual decision-making, although I am very biased:
* At least 2 or 3 of these struck me as being really time-sensitive (all grants are time-sensitive in a way, but I'm talking about "might have to shut down some or all operations" or "there is a time-sensitive scaling opportunity").
* I think there is a bit of a gap for early-ish funding, and benefits to opening up more room for funding by scaling these orgs (i.e. funding orgs beyond seed funding, but before they can absorb or have the track record for multi-million-dollar grants). It's still early days, but I feel pretty good about the trajectories of the young orgs that FP GHDF supported.
* Having a few high-EV, "big if true" grants feels reasonable to me (advocacy, economic growth, R&D).
I hope this context is useful, and note that I can't speak to FP's current/future plans for the FP GHDF. I value the engagement. Thanks!
Thanks for the comment! While we did find issues that we think imply Founders Pledge's BOTECs don't convincingly show that the FP GHDF's grants surpass 10x GiveDirectly in expectation in terms of marginal cost-effectiveness, we don't think we can justifiably conclude from this that these grants fail to pass that bar. As mentioned in the report, this is partly because:
* FP may have been sufficiently conservative in other inputs to compensate for the problems we identified
* There are additional direct benefits (for example, morbidity benefits) to the grants that FP acknowledged but decided not to model
* There may be additional large positive externalities from funding these early-stage and more neglected opportunities
Rosie's comment also covers some other considerations that bear on this and provides useful context.
Worth pointing out that the FP staff who could reply to this are on Thanksgiving break, so a reply will probably take until next week.