This got me wondering: how much agreement is there between grantmakers (assuming they already share some broad philosophical assumptions)?
Because if the top grants are much better than the marginal grants, and grantmakers would agree on which those are, then you could replace the ‘extremely busy’ grantmakers with less busy ones. The less busy ones would award approximately the same grants but be able to spend more time investigating marginal grants and giving feedback.
I’m concerned about the scenario where (nearly) all grantmakers are too busy to give feedback and applicants don’t improve their projects.
Doing random draws of the highest-impact grants and having a few grantmakers evaluate them independently seems like an easy (but expensive) test. I expect grantmakers to talk enthusiastically with their colleagues about their top grants, so grantmakers might already know whether this idea is worthwhile. But yes, if agreement is low, grantmakers should try to learn each other's tastes, and forward grants they think another grantmaker would consider high-impact. If grantmakers spend a lot of time developing specific worldviews, this seems like a more important thing to explore.
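As a sketch of what such an independent-evaluation test could look like: the snippet below computes pairwise Spearman rank correlations between hypothetical grantmakers' scores on a random sample of grants. All names and scores here are invented for illustration; real data would come from grantmakers scoring the same sampled grants without conferring.

```python
from itertools import combinations
from statistics import mean

def ranks(scores):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    r = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # extend j over the tie group starting at i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    ra, rb = ranks(a), ranks(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical independent scores from three grantmakers on ten sampled grants.
scores = {
    "A": [9, 7, 8, 3, 5, 6, 2, 4, 1, 8],
    "B": [8, 6, 9, 2, 5, 7, 3, 4, 1, 7],
    "C": [7, 8, 6, 4, 3, 5, 1, 2, 2, 9],
}

pairwise = {(p, q): spearman(scores[p], scores[q])
            for p, q in combinations(scores, 2)}
for pair, rho in pairwise.items():
    print(pair, round(rho, 2))
```

High pairwise correlations would suggest that swapping in less busy grantmakers changes little about which grants get funded; low correlations would support the idea of forwarding grants based on knowledge of each other's tastes.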
It might be worth checking whether people sometimes mistake high-impact grants for marginal ones. Another small random draw would probably work here too. (Sharing experiences of high-impact grants with your colleagues is a mechanism prone to strong selection effects, so it might be worth checking whether this causes significant oversights.)
I expect none of this to be new or interesting for the LTFF.
(Feedback about writing style and content is much appreciated, as this is my first real comment on the EA Forum.)
> This got me wondering: how much agreement is there between grantmakers (assuming they already share some broad philosophical assumptions)?
I wonder if “grantmakers” is the wrong level of abstraction here. I think LTFF grantmakers usually agree much more often than they disagree about top grants, and there is usually agreement about other grants too. I think (having not been involved in the selection process) the agreement is partly due to sharing similar opinions because we’re (we think) correct, and partly because similar judgement/reasoning processes/etc. are somewhat selected for.
I suspect there are similar (but probably lower?) correlations with other “good” longtermist grantmakers who do generalist grant evaluation work. I think some longtermist grantmakers (e.g. a subset of Open Phil Program Officers) specialize really deeply in a subfield, such that they have (relatively) deep specialist knowledge of some fields and are able to do a lot more active grantmaking/steering of that subfield. So they’ll be able to spot top grant opportunities that we normally can’t.
> Because if the top grants are much better than the marginal grants, and grantmakers would agree on which those are, then you could replace the ‘extremely busy’ grantmakers with less busy ones.
I suspect we have different empirical views on how busy (or more precisely, how high the counterfactual value of their time is) the “less busy” good grantmakers are. But in broad strokes, I think what you say is correct and is a direction that many funders are moving towards; I think the bar for becoming a grantmaker in EA has gone down a bunch in the last few years. E.g. 1) I don’t think I would have qualified as a grantmaker 2 years ago, 2) Open Phil appears to be increasing hiring significantly, 3) the Future Fund regranting program has brought in many new part-time grantmakers, etc.
> The less busy ones would award approximately the same grants but be able to spend more time investigating marginal grants and giving feedback.
I agree that this would probably be better than the status quo. However, naively, if you set up the infrastructure to do this well, you’d also have set up the infrastructure to do more counterfactually valuable activities (give more grants, give more advice to top grantees, do more active grantmaking, etc.).
I liked the comment. Welcome!
I think determining the correlation between the ex-ante assessment of the grants and their ex-post impact could be worthwhile. This could be restricted to e.g. the top 25% of the grants which were made, as these are supposedly the most impactful.
Randomization is definitely bold!