Thanks very much for doing this useful work! This seems like the sort of project that should definitely exist, but basically inexplicably fails to come about until some random person decides to do it.
I hate to give you more work after you have perhaps already put in more time on this than everyone else combined, but I can think of two things that might make this even more useful:
Conclusions about the types of grants that performed well or badly, e.g.
Did they tend to be larger organisations or individuals?
Were they more speculative or have a concrete roadmap?
Were they research-based, skill-acquiring, or community organising?
Comparisons to other granters.
A 30-40% success ratio isn’t that informative if readers don’t have a strong sense of what success means to you (because readers can’t see what you’ve classified as a failure), so we don’t know how good or bad this is for the LTFF. But if we could compare it to the other EA Funds, or to OpenPhil or GiveWell or the SFF, that could give useful context and help people decide which one to donate to.
Thanks! To answer the questions under the first bullet point:
Individuals performed better than organizations, but there weren’t that many organizations.
Individuals pursuing research directions mostly did legibly well, and the ones who didn’t do legibly well seem like they had less of a well-defined plan, as one might expect.
But some people with less defined directions also seem like they did well.
Also note that I may be rating research directions that didn’t succeed as less well defined.
I don’t actually have access to the applications, just to the grant blurbs and rationales.
Grants to organize conferences and workshops generally delivered, and I imagine that they generally had more concrete roadmaps.
There was only one upskilling grant.
In general, I think that the algorithm of looking at similar past grants and seeing whether they succeeded might be decently predictive for new grants, but that maybe isn’t captured by the distinctions above.
Some quick thoughts:
This work was meant to be built on. Hopefully there will be more similar work going forward (by both us and others), so much of the purpose here is to lay some foundation and help dip our toes into this sort of evaluation. (It can be controversial or harmful, so we’re going slowly.) As such, ideas for improvement are most welcome!
I’ve read the larger review. I’d note that few groups really surprised me. If you go through the list of grantees and think about what you know of each candidate, I’d bet you can get a roughly similar sense. (This is true for those who read LW/EA Forum frequently.) One of the main purposes of this sort of work is to either find big surprises, or to try and fail to find them. From my perspective, groups and individuals who had previously provided value (different from merely seeming prestigious, to be clear) went on to provide more value, and those that hadn’t didn’t do as well.
This work wasn’t done with the particular intention of helping to decide between EA Funds. We have been doing some other investigation here, somewhat accidentally (I’ve been assisting a donor lottery winner to decide). It’s a good thing to keep in mind going forward.
It would be great to later have measures of total impact for longtermism. We don’t have strong measures now, but would love to help develop these (or further encourage others to).