EA funders allocate over a hundred million dollars per year to longtermist causes, but only a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund retrospective evaluations that examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker’s track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.
I’d like to expand on this: a think tank or paper that formulates a way of evaluating all grants against a set of objective, quantifiable criteria, in order to better inform future allocation decisions so that each dollar spent makes the greatest possible impact.
In this respect, retrospective grant evaluation is but one variable for measuring grant effectiveness.
I have a few more ideas that can be combined to create some kind of weighted scoring mechanism for grant evaluation:
Social return on investment (SROI): arriving at a set of non-monetary variables to quantify social impact.
Cost-effectiveness analysis: GiveWell is a leader in this. We could consider applying some of their key learnings from the non-profit space to EA projects.
Horizon scanning: governmental bodies have departments that perform this kind of work. A proposal could be assessed by its alignment with emerging-technology forecasts.
Backcasting: seeking out ventures that are working towards a desirable future goal.
Pareto optimality: penalizing ideas that could have a negative impact on factors or people outside the intended target audience.
Competence and track record: prioritizing grant allocators/judges based on previously successful grants, and prioritizing grants to founders or organizations with a proven track record of competence.
Obviously this list could go on; these are just a few of the possible variables. The idea is simply to build a model that can score the utility of a proposed grant.
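To make the weighted scoring mechanism concrete, here is a minimal sketch in Python. The criterion names and weights are purely illustrative assumptions on my part, not an agreed-upon rubric; the point is only that each criterion contributes a normalised score and a weight, and that a side-effect penalty (the Pareto criterion) subtracts rather than adds.

```python
# Illustrative weights for the criteria listed above; these values are
# assumptions for the sake of the sketch, not a calibrated rubric.
CRITERIA_WEIGHTS = {
    "sroi": 0.25,                # social return on investment
    "cost_effectiveness": 0.25,  # GiveWell-style cost-effectiveness
    "horizon_alignment": 0.15,   # fit with emerging-technology forecasts
    "backcasting_fit": 0.15,     # progress toward a desirable future goal
    "externality_penalty": 0.10, # Pareto-style penalty for negative side effects
    "track_record": 0.10,        # grantee/allocator competence
}

def grant_utility(scores: dict) -> float:
    """Weighted sum of per-criterion scores, each normalised to [0, 1].

    The externality penalty is subtracted, so a grant with large
    negative side effects loses points; missing criteria count as 0.
    """
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        score = scores.get(criterion, 0.0)
        if criterion == "externality_penalty":
            total -= weight * score
        else:
            total += weight * score
    return total
```

A grant scoring perfectly on every positive criterion with no externalities would come out at 0.9 under these assumed weights; in practice the hard part is of course producing the per-criterion scores, not combining them.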
Does this neglect the notion that some grants are made strategically, to develop interest among different decisionmakers through appealing presentation, given that the objectives are already largely known, such as improving the lives of humans and animals in the long term and preventing actors, including those who use and develop AI, from reducing the wellbeing of these individuals? It could pose a reputational risk to evaluate along the lines of: ‘we started by convincing the government to focus on the long term by appealing to the vast extent of the future, so now we can start talking about quality of life in various geographies, and if that goes well we move on to advancing animal-positive systems across spacetime.’