With seven full years of funding on record, I believe a thorough evaluation of previous grants is needed. Even if the grants were provided with no strings attached, it is important to assess, from a broad perspective, whether they achieved their intended objectives.
This sounds plausible. Such evaluation involves time costs, but could yield valuable info about the reliability of the fund's grantmaking decisions, whether the "hits" sufficiently compensate for the "duds", and whether there are patterns among the duds that might helpfully inform future grantmaking.
I'd be a bit surprised if there wasn't already a process in place for retrospective analysis of this sort. Is there any public info available about if/how EA Funds do this?
Would such a game "positively influence the long-term trajectory of civilization," as described by the Long-Term Future Fund? For context, Rob Miles's videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
I'm a bit wary of picking out weird-sounding proposals as "obviously" ex ante duds. Presumably a lot of the "digital content" grants were aimed at raising public awareness of key longtermist issues (e.g. AI safety), and it seems prima facie reasonable to think both that (i) a computer game could reach a different audience from YouTube videos, and (ii) raising awareness of key longtermist issues is a helpful first step for making broader progress on them.
For people who disagree with (ii), I think a more general post critiquing the very idea that movement-building and raising awareness are valuable strategies could be interesting (and maybe more productive than picking out particular attempts that just "sound weird" to a general audience)?
When I looked at this as part of the 2022 red teaming contest, I found that "EA Funds has received roughly $50 million in donations and has made hundreds of grants, but has never published any post-grant assessments." I'm almost positive there haven't been any retrospective analyses of EA Funds grants since then.
This problem isn't unique to EA Funds. I also found that EA Grants and the Community Building Grants program both lacked any kind of public post-grant assessment.
The unfortunate result of this situation is that while lots of time and money have been invested in various grantmaking programs, we don't really know much about which types of grantmaking are most effective (e.g. granting to individuals vs. established organizations). It's true that post-grant assessment is costly to conduct, but it's disappointing that we haven't made this investment, which could significantly improve the efficacy of future grantmaking.
There was an LTFF evaluation a few years ago.
I wonder if you could make post-grant assessment really cheap by automatically emailing grantees some sort of Google Form. It could show them what they wrote on their grant application and ask them how well they achieved their stated objectives, plus various other questions. You could have a human randomly audit the responses to incentivize honesty.
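To make this concrete, here's a minimal sketch of what the automated side could look like, in Python. The grants.csv file and its column names (grantee_email, project_title, stated_objectives, form_url) are invented for illustration rather than an actual EA Funds export, and a real version would send email via an API instead of printing drafts:

```python
import csv
import random

AUDIT_RATE = 0.10  # fraction of grants flagged for a human spot-check
random.seed(0)     # reproducible audit selection

def draft_followup(grant: dict) -> str:
    """Build a follow-up message that quotes the original application back."""
    return (
        f"You received a grant for '{grant['project_title']}'.\n"
        f"In your application you wrote:\n{grant['stated_objectives']}\n"
        f"Please rate how fully these objectives were achieved (1-5), and note\n"
        f"any outputs, via this form: {grant['form_url']}\n"
    )

def main() -> None:
    # Hypothetical export of past grants; column names are made up.
    with open("grants.csv", newline="", encoding="utf-8") as f:
        grants = list(csv.DictReader(f))

    for grant in grants:
        # A real pipeline would send this via an email service; here we just print it.
        print(f"--- To: {grant['grantee_email']} ---\n{draft_followup(grant)}")

    # Randomly flag ~10% of grants for a human auditor, to keep self-reports honest.
    if grants:
        flagged = random.sample(grants, max(1, int(AUDIT_RATE * len(grants))))
        print("Flagged for audit:", [g["project_title"] for g in flagged])

if __name__ == "__main__":
    main()
```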
Wow, I didn't realize that evaluation existed! Thanks for sharing! (Though given that this evaluation only covers ~2 dozen grants for one fund, I think my overall assessment that there's little in the way of post-grant evaluation still holds.)
Self-assessment via a simple Google Form is an interesting idea. My initial reaction is that it would be hard to structure the incentives well enough for me to trust self-assessments. But it still could be better than nothing. I'd be more excited about third-party evaluations (like the one you shared) even if they were extremely cursory (e.g. flagging which grants have evidence that the project was even executed vs. those that don't) and selective (e.g. ignoring small grants to save time/effort).
Yeah, to be clear, I am also quite sad about this. If I had more time alongside my other responsibilities, I think doing better public retrospectives on the grants the LTFF has made would be one of my top things to do.
"I'd be curious to see more analysis here. If it is the case that a very large fraction of grants are useless, and very few produce huge wins, then I agree that that would definitely be concerning."
This wouldn't necessarily be concerning to me, if the wins are big enough. If you have a "hits-based" approach, then maybe only 1 in 5 (or 1 in 10) grants being a huge win is fine if you are getting enormous impact from those.
I would LOVE to see a proper evaluation of "hits-based" funding from funders like OpenPhil and LTFF (I mentioned this a while back). To state the obvious, a "hits-based" approach only makes sense if you actually hit every now and then: are we hitting? I would also hope there was a pre-labelling system marking which grants were "hits-based" bets, so that the evaluation couldn't cherry-pick grants in a way that biases it towards either success or failure.
One possibility would be for these orgs to pay an external evaluator to look at these grants, to reduce bias. Above, someone mentioned that 3-8% of org time could be spent on evaluations; how about something like 2% of the money? For the LTFF, using 2% of grant funds to pay for an external evaluation of grant success, at a roughly $1 million a year budget, would be about $60,000 to assess around three years of grants. I'm sure a very competent person could do a pretty good review in 4-6 months for that money.
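For concreteness, the arithmetic behind that $60,000 figure, assuming the LTFF grants roughly $1 million per year:

$$
0.02 \times \$1{,}000{,}000/\text{year} \times 3\ \text{years} = \$60{,}000
$$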