Hey Aidan,
I want to acknowledge my potential biases for any new comment thread readers: I used to be the senior researcher running the fund at FP, most or all of the errors highlighted in the report are mine, and I now work at GiveWell. These are my personal views.
I think getting further scrutiny of and engagement with key grantmaking cruxes is really valuable, and I think the discussion this report has prompted is great. A few points from my perspective:
1. As Matt’s comment points out, there is a historical track record for many of these grants. Some have gone on to be GiveWell-supported, or have (in my opinion) otherwise demonstrated success in a way that suggests they were a ‘hit’. In fact, with the caveat that there are a good number of recent grants where it’s too early to tell, there hasn’t yet been one that I consider a ‘miss’. Is it correct to update primarily from three spot checks of early-stage BOTECs (my read of this report) rather than from what actually happened after the grants were made? Does this risk Goodharting?
2. Is this really comparing like for like? In my view, small grants shouldn’t require as strong an evidence base as, say, a multimillion-dollar grant, mainly for the time-expenditure reasons that Matt points out. I am concerned that this report pushes us further towards a point where, because of the level of rigour (and therefore time) required, the incentive for grantmaking orgs is to only make very large grants. I think this systematically disadvantages smaller orgs, which I see as a negative (though your view here partially depends on your view on point 3 below).
3. In my view, a crucial crux here is the value of supporting early-stage organisations, alongside other potentially riskier items such as advocacy and giving multipliers. I am genuinely uncertain, and think that smart and reasonable people can disagree here. But I agree with Matt’s point that there is significant upside in potentially generating large future room for funding at high cost-effectiveness. This kind of long-term optionality benefit isn’t typically included in an early-stage BOTEC (because doing a full value-of-information analysis is time-consuming), and I think it’s somewhat underweighted in this report (see the toy sketch after these points).
4. I no longer have access to the BOTECs to check (since I’m no longer at FP), and again I think the focus on BOTECs is a bit misplaced. I do want to briefly acknowledge, though, that I’m not sure all of these are actually errors (but I still think it’s likely there are some BOTEC errors, and I think this would be true for many or most orgs making small grants).
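To make the optionality point in 3 a bit more concrete, here is a minimal toy sketch in Python. All inputs are hypothetical and purely illustrative; they are not drawn from any actual FP GHDF BOTEC. It just shows how a crude “future room for funding” term, of the kind a quick early-stage BOTEC typically omits, can be decision-relevant against a 10x GiveDirectly bar:

```python
# Toy example only: compares a grant's modelled cost-effectiveness with and
# without a crude "future room for funding" (optionality) term. All numbers
# are hypothetical and do not come from any actual FP GHDF BOTEC.

direct_value_multiple = 8.0   # modelled direct impact, in multiples of GiveDirectly
p_scale = 0.10                # hypothetical chance the org scales into a future 'hit'
value_if_scaled = 40.0        # hypothetical value credited to this grant if it does

bar = 10.0                    # FP GHDF's stated bar: 10x GiveDirectly

without_optionality = direct_value_multiple
with_optionality = direct_value_multiple + p_scale * value_if_scaled

for label, value in [("Without optionality term", without_optionality),
                     ("With optionality term", with_optionality)]:
    verdict = "above" if value >= bar else "below"
    print(f"{label}: {value:.1f}x GiveDirectly ({verdict} the 10x bar)")
```

On these made-up numbers the same grant sits below the bar without the optionality term and above it with the term included, which is the kind of shift I have in mind.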
Hi Rosie, thanks for sharing your thoughts on this! It’s great to get the chance to clarify our decision-making process so it’s more transparent, in particular so readers can make their own judgement as to whether they agree with our reasoning about FP GHDF. Some of my thoughts on each of the points you raise:
1. We agree there is a positive track record for some of FP GHDF’s grants, and this is one of the key countervailing considerations against our decision not to rely on FP GHDF in the report. Ultimately, we concluded that the instances of ‘hits’ we were aware of were not sufficient grounds to rely on FP GHDF going forward. Some of our key reasons for this included:
- These ‘hits’ seemed to fall into clusters for which we expect there is a limited supply of opportunities; e.g., several that went on to be supported by GiveWell were AIM-incubated charities. This means we expect such opportunities to be less representative of the kinds of grants FP GHDF would make on the margin with additional funding.
- We were not convinced that these successes would be replicated in the future under the new senior researcher (see our crux relating to consistency of the fund).
2. Ultimately, what we are trying to do is establish where a donor’s next dollar can best be spent. We agree it might not be worth a researcher spending as much time on small grants, but this by itself should not be a justification for us to recommend small grants over large ones (though we agree point 3 can be a relevant consideration here).
3. We agree that the relative value donors place on supporting early-stage and riskier opportunities compared to more established orgs could be a crux here. However, we still needed a bar against which we could assess FP GHDF (i.e., we couldn’t have justifiably relied on FP GHDF on the basis of this difference in worldview, independent of the quality of FP GHDF’s grantmaking). As such, we tried to assess whether FP GHDF grant evaluations convincingly demonstrated that opportunities met their self-stated bar. As we acknowledge in the report, just because we don’t think the grant evaluations convincingly show opportunities meet the bar doesn’t mean they really don’t (e.g., the researcher may have considered information not included in the grant evaluation report). However, we can only assess on the basis of the information we reviewed.
4. Regarding our focus on the BOTECs potentially being misplaced: I want to be clear that we reviewed all of these grant evaluations in full, not just the BOTECs. If we had thought the issues we identified in the BOTECs were sufficiently compensated for by reasoning elsewhere in the grant evaluations, this would have played a part in our decision-making. I think assessing how well the BOTECs demonstrate that opportunities surpass Founders Pledge’s stated bar was a reasonable evaluation strategy because:
a) As mentioned above, these BOTECs were highly decision-relevant: grants were only made if the BOTEC showed an opportunity to surpass 10x GiveDirectly, and we know of no instances where an opportunity scored above 10x GiveDirectly and would not have been eligible for FP GHDF funding.
b) The BOTECs are where many of the researcher’s judgements are made explicit and so can be assessed. At least for the three evaluations we reviewed in detail, a significant fraction of the work in the grant evaluation was justifying inputs to the BOTECs.
On the other point raised here: it is true that not all of the concerns we had with the BOTECs were errors. Some of our concerns related to inputs that seemed (to us) optimistic and were, in our view, insufficiently justified considering the decision-relevant effect they had on the overall BOTEC. While not errors, these made it more difficult for us to justifiably conclude that the FP GHDF grants were, in expectation, competitive with GiveWell.
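To illustrate the kind of sensitivity we mean, here is a minimal toy sketch in Python. The model and all inputs are hypothetical and are not taken from any FP GHDF grant evaluation; it simply shows how a single optimistic, weakly justified input can move a modelled grant across a 10x GiveDirectly bar:

```python
# Toy example only: shows how one optimistic input can move a modelled grant
# across a 10x GiveDirectly bar. The model and inputs are hypothetical and are
# not taken from any FP GHDF grant evaluation.

def botec_multiple(reach_per_dollar, effect_size, evidence_discount):
    """Very rough cost-effectiveness in multiples of GiveDirectly (toy model)."""
    givedirectly_good_per_dollar = 0.01  # hypothetical benchmark
    good_per_dollar = reach_per_dollar * effect_size * evidence_discount
    return good_per_dollar / givedirectly_good_per_dollar

bar = 10.0

# Same grant, same model; only the effect-size input changes.
optimistic = botec_multiple(reach_per_dollar=2.0, effect_size=0.08, evidence_discount=0.8)
conservative = botec_multiple(reach_per_dollar=2.0, effect_size=0.05, evidence_discount=0.8)

for label, value in [("Optimistic effect size", optimistic),
                     ("Conservative effect size", conservative)]:
    verdict = "passes" if value >= bar else "fails"
    print(f"{label}: {value:.1f}x GiveDirectly ({verdict} the 10x bar)")
```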