“I’d be curious to see more analysis here. If it is the case that a very large fraction of grants are useless, and very few produce huge wins, then I agree that that would definitely be concerning.”
This wouldn’t necessarily be concerning to me, if the wins are big enough. With a “hits based” approach, even 1 huge win in 5 (or 1 in 10) grants could be fine if those hits deliver enormous impact.
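To make that break-even logic concrete, here is a minimal back-of-envelope sketch in Python. All the numbers (grant cost, hit rate, impact multiple) are hypothetical assumptions chosen for illustration, not figures from any actual funder.

```python
# Back-of-envelope check on hits-based funding, using made-up numbers.
grant_cost = 100_000      # assumed cost per grant (USD)
hit_rate = 1 / 10         # assume 1 grant in 10 is a "hit"
hit_multiple = 50         # assume a hit returns 50x its cost in impact

expected_impact = hit_rate * hit_multiple * grant_cost
print(f"Expected impact per grant: ${expected_impact:,.0f}")  # $500,000

# Even with 9 in 10 grants producing roughly nothing, expected impact
# is 5x cost under these assumptions. Whether the hits actually
# materialise is exactly what an evaluation should check.
```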
I would LOVE to see a proper evaluation of “hits based” funding from funders like OpenPhil and LTFF (I mentioned this a while back). To state the obvious, a “hits based” approach only makes sense if you actually hit every now and then. Are we hitting? I would also hope grants were labelled “hits based” in advance, so that evaluators couldn’t cherry-pick after the fact in a way that biases the assessment towards either success or failure.
One possibility would be for these orgs to pay an external evaluator to look at these grants, to reduce bias. Someone above mentioned that 3-8% of org time could be spent on evaluations; how about something like 2% of the money? For the LTFF, with a budget of around a million dollars a year, 2% of grant funds would come to $60,000 to assess roughly three years of grants. I’m sure a very competent person could do a pretty good review in 4-6 months for that money.
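For what it’s worth, a quick sketch of the arithmetic behind that $60,000 figure (the budget and percentages are those stated above; nothing else is assumed):

```python
# Verifying the 2%-of-funds figure quoted above.
annual_budget = 1_000_000   # LTFF grant budget per year (as stated)
evaluation_share = 0.02     # 2% of grant funds set aside for evaluation
years_assessed = 3          # the review covers ~3 years of grants

evaluation_budget = annual_budget * evaluation_share * years_assessed
print(f"External evaluation budget: ${evaluation_budget:,.0f}")  # $60,000
```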