Hey Vasco, these are my personal thoughts and not FP’s (anything FP says should take precedence). I have pretty limited capacity to respond, but a few quick notes—
First, I think it’s totally true that there are some BOTEC errors, many or most of them mine (thank you, GWWC, for spotting them; it’s so crucial to a well-functioning ecosystem, and, more selfishly, to improving my skills as a grantmaker. I really value this!).
At the same time, these are hugely rough BOTECs that were never meant to be CEAs in the rigorous GiveWell sense: they were used as decision-making tools to enable quick decisions under limited capacity (I do not take the actual numbers seriously; I expect they’re wrong in both directions), with many factors beyond the BOTEC going into grantmaking decisions.
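To make the "rough BOTEC" point concrete: a back-of-the-envelope estimate typically multiplies a few highly uncertain inputs, so the point estimate can easily be off by an order of magnitude in either direction. Here is a minimal, purely illustrative sketch (all numbers are hypothetical and not from any actual FP BOTEC) showing how wide the resulting interval can be:

```python
import random

random.seed(0)

# Hypothetical BOTEC: cost-effectiveness = reach * effect / cost.
# Each input is drawn from a deliberately wide range to reflect
# the uncertainty typical of a quick back-of-the-envelope estimate.
def botec_sample():
    people_reached = random.uniform(5_000, 50_000)    # order-of-magnitude guess
    effect_per_person = random.uniform(0.01, 0.1)     # e.g. DALYs averted per person
    cost = random.uniform(100_000, 500_000)           # grant size in dollars
    return people_reached * effect_per_person / cost  # DALYs averted per dollar

samples = sorted(botec_sample() for _ in range(10_000))
low, median, high = samples[500], samples[5_000], samples[9_500]
print(f"90% interval: {low:.2e} to {high:.2e} DALYs/$ (median {median:.2e})")
```

The spread between the 5th and 95th percentile here is several-fold, which is why a grantmaker might treat such numbers as a rough decision aid rather than a rigorous cost-effectiveness analysis.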
I don’t want to make judgments about whether the fund (while I was there) was surpassing GiveWell; I’m super happy to leave this to GWWC. I was focused on funders who would not counterfactually give to GW, meaning that this was less decision-relevant for me. I will note that it is (imo) insanely difficult to compete with GiveWell.
I think it’s helpful to look at the grant history from FP GHDF. Here are all the grants that I think have been made by FP GHDF since Jan 2023; apologies if I’ve missed any:
* 4 were GW supported before FP granted to them (New Incentives, Sightsavers, Pure Earth, Evidence Action)
* 3 have already subsequently been GW supported (r.i.c.e, FEM, Suvita)
* 4 are ecosystem multiplier-y grants (Giving Multiplier, TLYCS, Effective Altruism Australia, Effektiv Spenden)
* 1 was an advocacy grant to 1DaySooner, 1 was an advocacy grant to LEEP
* 5 are recent young orgs that we think are promising, though of course supporting young orgs is hit and miss (Ansh, Taimaka, Essential, IMPALA, HealthLearn)
* 1 was a deworming research grant
* 1 was an anti-corruption journalism grant which we think is promising due to economic growth impacts (OCCRP)
I think it’s plausible that I spent too little time on these grant evals, and this probably contributed to BOTEC errors. But I feel pretty good about the actual decision-making, although I am very biased:
* At least 2 or 3 of these struck me as being really time-sensitive (all grants are time-sensitive in a way, but I’m talking about ‘might have to shut down some or all operations’ or ‘there is a time-sensitive scaling opportunity’).
* I think there is a bit of a gap for early-ish funding, and benefits to opening up more room for funding by scaling these orgs (i.e. funding orgs beyond seed funding, but before they can absorb, or have the track record for, multi-million grants). I think analyses that look at the trajectories of this (in terms of moving counterfactual money) would be really interesting.
* Having a few high-EV, ‘big if true’ grants feels reasonable to me (advocacy, economic growth, R&D).
I hope this context is useful. I value the engagement; thanks.