When long-termist grant applications don’t get funded, the reason usually isn’t lack of funding, but one of the following:
The grantmaker was unable to vet the project (due to time constraints or lack of domain expertise) or at least thought it was a better fit for a different grantmaker.
The grantmaker thought the project came with a high risk of accidental harm.
This post contains great ideas for resolving the former point, but doesn’t demonstrate high awareness of the latter concern. Awareness of these risks seems important to me, especially for funders: High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them. I’d be interested in your thoughts on that.
I really like that you’re providing feedback to applicants! In general, I wish the EA community was more proactive with providing critical feedback.
This post was intended as a grant application announcement post that also happened to contain some information about new funder-friendly and applicant-friendly policies we are adopting. I did not include any information about our evaluation process or risk reduction process in the body of the post, so I would not expect the post to convey high awareness of either reason why long-termist applications don’t get funded.
I am curious which of the ideas we included you think address your first point about grantmakers being unable to vet projects. I’m not sure whether application sharing, rolling applications, or providing feedback to grant applicants addresses your first or second point.
To elaborate more on risk, I wrote in another comment on this post that:
“We have several layers of checks to help reduce risks and improve grant decision-making, including initial staff review of incoming applications, angels sharing their evaluations with one another and talking with external contacts/experts if appropriate, and hearing the opinions of external grantmakers on grant applications we have received (we still need to talk with grantmakers to set this up).”
I think that an initial staff review can help detect risks, and if we notice a large problem with downside risk in incoming projects, we can enhance the initial staff review process. The angel evaluation period is where a lot of nuanced considerations about risk can come up, since angels can share their perspectives on a grant proposal with other angels and external experts, and we have angels with significant experience in areas like meta and AI. Finally, this wasn’t mentioned in the post, but we are aiming to share evaluations both ways with funders in EA. I think this can go a long way towards making all funders aware of all of the potential risks of a project.
Angels in the group seem to actively avoid funding projects that they feel they are not qualified to evaluate. Angels can also point out funding behavior from other angels that they perceive as risky, although from what I’ve seen, our angels lean more on the side of risk avoidance than anything else.
You wrote:

“High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them. I’d be interested in your thoughts on that.”
As Denise mentioned in a post on Jan’s project evaluation idea, there is a category of “projects which are simply bad because they do have approximately zero impact, but aren’t particularly risky. I think this category is the largest of the four.” This lines up with many of the applications I am seeing. The situation might be different for long-term/x-risk projects specifically, but since we are a general funding group whose individual EA funders have a wide variety of backgrounds and experiences, we are not receiving a large number of such high-risk applications relative to the entire pool of applications.
Therefore, I wouldn’t say that our applications are likely to be “skewed towards high-downside-risk projects.” I expect to continue receiving a large number of projects that may have very low impact, just as other funders likely are. As Oliver mentioned, “in practice I think people will have models that will output a net-positive impact or a net-negative impact, depending on certain facts that they have uncertainty about, and understanding those cruxes and uncertainties is the key thing in understanding whether a project will be worth working on.” I think other EA funders will fund projects that match their own models, but because people’s models differ wildly and are very likely wrong in many cases (even the most successful VCs see a high failure rate among the startups they fund), I don’t know whether other funders are actually funding a significant fraction of the opportunities that end up having the highest impact.
To my understanding, EA Grants is the only other funder making general grants: BERI Grants and the EAF Fund focus exclusively on long-term projects, and the EA Funds focus on their respective areas and also fund larger organizations. Since EA Grants is currently closed for applications (I support rolling applications rather than application rounds), we are receiving applications that other funders haven’t funded simply because the only other general funder isn’t accepting applications right now. With funder application sharing, which I support, funders will be able to see the entire pool of proposals rather than the pool minus the projects other funders have already funded. This will help each funder evaluate the quality of the projects they are funding relative to the quality of the projects other funders have funded.
“I really like that you’re providing feedback to applicants! In general, I wish the EA community was more proactive with providing critical feedback.”

Thanks! I completely agree.
Thanks for the thorough response! I think I agree with what you said, and I think the process you mentioned seems adequate to address the risks (if implemented well).
My perception is that application sharing could help address vetting constraints because it allows other funders (who may have more expertise in a particular area) to help with vetting. I think other funders probably don’t have rolling applications because of the increased effort this entails, so in that sense rolling applications can also help resolve vetting constraints.