Doing random draws of the highest-impact grants and having a few grantmakers evaluate them independently, without interacting, seems like an easy (but expensive) test. I expect grantmakers to talk enthusiastically with their colleagues about their top grants, so they may already have a sense of whether this idea is worthwhile.
But yes, if agreement is low, grantmakers should try to learn each other's tastes and forward grants they think another grantmaker would consider high impact. If grantmakers spend a lot of time developing specific worldviews, this seems like an even more important thing to explore.
It might be worth checking whether people sometimes mistake high-impact grants for marginal ones. Another small random draw would probably work here too. (Sharing experiences of high-impact grants with colleagues is a mechanism prone to strong selection effects, so it might be worth checking whether this leads to considerable oversights.)
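For concreteness, here is a minimal sketch of what the agreement check behind these random draws could look like, assuming each grantmaker's blinded re-evaluation is recorded as a simple numeric score. All names and data below are hypothetical, not from any real fund:

```python
# Hypothetical sketch: low pairwise agreement on a random sample of "top"
# grants would suggest tastes diverge, and forwarding grants across
# grantmakers could add value.
import itertools
import random

from scipy.stats import spearmanr

# Hypothetical pool: grants flagged as highest-impact by at least one grantmaker.
top_grants = [f"grant_{i}" for i in range(200)]
sample = random.sample(top_grants, 20)  # small random draw keeps the cost down

# Hypothetical independent scores (1-10) from three grantmakers; in practice
# these would come from blinded re-evaluations, not old grant notes.
scores = {
    "grantmaker_a": [random.randint(1, 10) for _ in sample],
    "grantmaker_b": [random.randint(1, 10) for _ in sample],
    "grantmaker_c": [random.randint(1, 10) for _ in sample],
}

# Pairwise rank correlation as a simple agreement measure.
for (name_1, s1), (name_2, s2) in itertools.combinations(scores.items(), 2):
    rho, p_value = spearmanr(s1, s2)
    print(f"{name_1} vs {name_2}: rho={rho:.2f} (p={p_value:.2f})")
```

Pairwise rank correlation is just one crude agreement measure; a dedicated inter-rater reliability statistic such as Krippendorff's alpha would also work.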
I expect none of this to be new or interesting to the LTFF.
(Feedback on writing style and content is much appreciated, as this is my first real comment on the EA Forum.)
I liked the comment. Welcome!
I think determining the correlation between the ex-ante assessment of grants and their ex-post impact could be worth it. This could be restricted to, e.g., the top 25% of grants made, as these are presumably the most impactful.
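A minimal sketch of that check, with all data simulated and the strength of the ex-ante/ex-post relationship assumed rather than taken from any real fund:

```python
# Hypothetical sketch: rank correlation between ex-ante scores and ex-post
# impact, restricted to the top 25% of grants by ex-ante assessment.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_grants = 400
ex_ante = rng.normal(size=n_grants)                  # scores at grant time
ex_post = 0.4 * ex_ante + rng.normal(size=n_grants)  # assumed noisy relation

# Keep only the top 25% by ex-ante assessment.
cutoff = np.quantile(ex_ante, 0.75)
mask = ex_ante >= cutoff
rho, p_value = spearmanr(ex_ante[mask], ex_post[mask])
print(f"top-25% rank correlation: rho={rho:.2f} (p={p_value:.2f})")
```

One caveat: restricting to the top 25% mechanically shrinks the correlation through range restriction, so a low value within the subset does not by itself show that ex-ante assessments are uninformative.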
Randomization is definitely bold!