Select examples of adverse selection in longtermist grantmaking

Sometimes, there is a reason other grantmakers aren’t funding a fairly well-known EA(-adjacent) project.
This post is written in a professional capacity, as a volunteer/sometimes contractor for EA Funds’ Long-Term Future Fund (LTFF), which is a fiscally sponsored project of Effective Ventures Foundation (UK) and Effective Ventures Foundation USA Inc. I am not and have never been an employee at either Effective Ventures entity. Opinions are my own and do not necessarily represent those of any of my employers or of either Effective Ventures entity. I originally wanted to make this post a personal shortform, but Caleb Parikh encouraged me to make it a top-level post instead.
There is an increasing number of new grantmakers popping up, as well as some fairly rich donors in longtermist EA who are thinking of playing a more active role in their own giving (instead of deferring). I am broadly excited about the diversification of funding in longtermist EA. There are many advantages to having a diverse pool of funding:
Potentially increases financial stability of projects and charities
Allows for a diversification of worldviews
Encourages accountability, particularly of donors and grantmakers – if there’s only one or a few funders, people might be scared of offering justified criticisms
Access to more or better networks – more diverse grantmakers might mean access to a greater diversity of networks, allowing otherwise overlooked and potentially extremely high-impact projects to be funded
Greater competition and a race to excellence and speed among grantmakers – I’ve personally been both faster and much slower than other grantmakers at different times, and it’s helpful to have a competitive ecosystem that improves the grantee and/or donor experience
However, this post will mostly talk about the disadvantages. In particular, I want to address adverse selection: if a project that you’ve heard of through normal EA channels[1] hasn’t been funded by existing grantmakers like the LTFF, there is a decently high likelihood that those grantmakers have already evaluated the project and (sometimes for sensitive private reasons) decided it is not worth funding.
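To make “decently high likelihood” concrete, here is a minimal Bayesian sketch, in Python with entirely made-up numbers, of why “visible through normal EA channels but still unfunded” is itself evidence about a project’s quality:

```python
# A toy model of adverse selection in grant funding. Every number below is an
# assumption chosen for illustration, not an estimate of real base rates.

p_good = 0.30               # assumed prior: the project is worth funding
p_funded_given_good = 0.70  # assumed: existing funders catch most good projects
p_funded_given_bad = 0.10   # assumed: a few bad projects get funded anyway

# Law of total probability: how often is a project visible but unfunded?
p_unfunded = (1 - p_funded_given_good) * p_good \
           + (1 - p_funded_given_bad) * (1 - p_good)

# Bayes' rule: update on observing that the project remains unfunded.
p_good_given_unfunded = (1 - p_funded_given_good) * p_good / p_unfunded

print(f"P(good)            = {p_good:.0%}")                # 30%
print(f"P(good | unfunded) = {p_good_given_unfunded:.1%}") # 12.5%
```

Under these toy assumptions, learning that a well-known project remains unfunded cuts the probability that it’s worth funding from 30% to about 12%. The exact numbers don’t matter; the point is the direction of the update.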
Reasons against broadly sharing reasons for rejection
From my perspective as an LTFF grantmaker, it is frequently imprudent, impractical, or straightforwardly unethical to make our reasons for rejection public. For example:
Our assessments may include private information that we are not able to share with other funders.
Writing up our reasons for rejecting specific projects may be time-consuming or politically unwise, and may invite additional ire toward applicants (“punching down”).
We don’t want to reify our highly subjective choices too much, and public writeups of rejections can cause informational cascades.
Often, other funders don’t think to ask whether we have already rejected the project and why (and/or rejected grantees don’t pass on that they’ve been rejected by another funder).
Sharing negative information about applicants would make applying to EA Funds more costly and could discourage promising applicants.
Select examples
Here are some (highly) anonymized examples of grants I have personally observed being rejected by a centralized grantmaker. For further anonymization, in some cases I’ve switched details around or collapsed multiple examples into one. Most, though not all, of the examples come from my own experience working on the LTFF. Many of these grants were later funded by other grantmakers or private donors.
An academic wants funding for a promising-sounding existential safety research intervention in an area of study that none of the LTFF grantmakers are familiar with. I asked around until I found a respected scientist in an adjacent field. The scientist told me that the work the applicant wanted funding for is a common line of inquiry in that subfield, not a new line of research as the application claimed.
Someone’s application has a lot of buzzwords and a few endorsements from community members, but after reading carefully about the work, talking it over, and thinking it through, my colleagues and I cannot tell how the work is meaningfully different from other forms of ML capabilities research.
An applicant wants to re-evaluate AI safety from an academic perspective that’s extremely under-represented in AI safety, longtermism, and/or EA. I asked around until I found an acquaintance in that general field whom colleagues could vouch for. The acquaintance told me that while they’re not familiar with the applicant’s specific subfield, by the standard metrics of their field the applicant’s published work was lacking in rigor.
An applicant has a promising-sounding application and sounds smart, but we’ve funded them before for a research grant and gotten no results (not even negative results), with no explanation for the lack of results.
An application sounds promising, but we’ve funded the applicant before for a research grant and thought the results were sufficiently mediocre that scarce resources are better used elsewhere (this is perhaps the most common reason for rejection on this list).
An application was flagged as being rejected by a different longtermist grantmaker. I asked the other grantmaker for assistance and they mentioned serious issues with the project lead’s professional competency, which is a problem as the field they (want to) work in is quite sensitive.
An application sounds promising, but one of the other LTFF fund managers flagged rumors about the applicant. I conducted my own investigation and concluded that the applicant has enough integrity or character red flags (for example, credible evidence of plagiarism, data fabrication, interpersonal harm in a professional setting, or failure to fulfill contractual obligations) that I’m not comfortable recommending funding to them.
An application sounds promising, but I was a bit concerned about a few yellow flags in the grantee’s history and portfolio. I attempted to investigate further, but learned soon after that a different grantmaker had already funded them, without (as far as I can tell) doing the same due diligence.
I’ve since followed up and am reasonably sure that none of my worries materialized. This is a good example of how an abundance of caution can be excessively costly or net negative.
Another LTFF fund manager talked to multiple donors who said things like “I funded this because I was confident that the LTFF would fund it, but I could do it more quickly.” The fund manager investigated the grants in question and found that the LTFF had already rejected several of them; for some others, the fund manager was quite skeptical they’d clear the LTFF’s funding bar.
Some tradeoffs and other considerations
Note that I’ve selected these examples partly for their relevance to downside risks, and otherwise for being interesting. However, the primary reason projects get rejected by the LTFF and other funders is the perception that the expected outcomes don’t justify the expense. We can, of course, make mistakes in these evaluations, and I welcome differing opinions about our evaluations and funding choices. Assuming projects are always adversely selected is also quite risky, as the EA funding landscape is far from efficient.
Broadly speaking, in the current climate it is hard for new grantmakers to know whether a grant application was a) not looked at by other grantmakers, b) rejected for bad reasons, c) rejected for reasons orthogonal to the new grantmaker’s interests, or d) rejected for good reasons. Always erring on the side of funding projects that appear, on the object level, to have high positive impact runs into unilateralist’s curse considerations (see the toy simulation below), as well as straightforwardly wasting money. On the other hand, grantmakers are far from perfect and do make errors, and well-coordinated grantmakers might be more likely to make correlated errors. So you might expect a network of independent funders to increase the odds that unusual-but-great projects won’t be overlooked.
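As a rough illustration of the unilateralist’s curse dynamic, here is a toy Monte Carlo sketch (all parameters are assumptions, not claims about real funder behavior): each funder independently sees a noisy estimate of a project’s true value and funds it if the estimate looks positive.

```python
# Toy unilateralist's curse simulation: a project with true value -1 (net
# negative) is evaluated by n independent funders, each observing the true
# value plus Gaussian noise. The project gets funded if ANY funder's noisy
# estimate comes out positive.
import random

random.seed(0)

def p_funded_by_someone(n_funders, true_value=-1.0, noise=2.0, trials=100_000):
    funded = 0
    for _ in range(trials):
        if any(random.gauss(true_value, noise) > 0 for _ in range(n_funders)):
            funded += 1
    return funded / trials

for n in (1, 3, 10):
    print(f"{n:>2} independent funder(s): "
          f"P(net-negative project gets funded) ≈ {p_funded_by_someone(n):.0%}")
```

Under these assumptions, a single funder mistakenly backs the net-negative project roughly 30% of the time, but with ten uncoordinated funders it gets funded almost always; correlated errors among well-coordinated funders are the mirror-image failure mode.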
I’m not exactly sure how to navigate these tradeoffs. I mention salient costs of decentralization above, but of course centralization is also very dangerous. Comments are welcome.
[1] As opposed to, e.g., a project you heard of through very private networks, which means it is less likely to have applied to any of the existing funds.