Similarly to LTFF, we solicit applications via an open process advertised on relevant sites and Facebook groups, and by individually reaching out to promising candidates. Additionally, we create a request for proposals (RFP) and distribute it accordingly, which I believe LTFF decided not to do. As at LTFF, applications at AWF are initially triaged, rejecting those that are out of scope or clearly below the bar for funding, but we reject fewer than 5% of applications at that stage rather than 40%. The remaining applications are assigned to a primary and a secondary fund manager with relevant, compatible expertise.
From the LTFF:
The assigned fund manager will read the application in detail, and often reach out to interview the applicant or ask clarifying questions. In addition, they may read prior work produced by the applicant, reach out to the applicant's references, or consult external experts in the area. They produce a brief write-up summarizing their thinking, and assign a vote to the application.
This is applicable to AWF as well. However, before the primary reviewer assigns their vote, they notify the secondary reviewer and ask for their input. We're also a bit less likely to reach out to interview the applicant.
What follows is voting by all fund managers. As outlined in my answer to another of Marcus's questions, we grade all applications with the same scoring system. In the prior round, after the primary and secondary investigators' reviews, and once we had all read their conclusions, each fund manager gave a score from +5 to −5 (abstaining in cases of conflicts of interest), with +5 being the strongest possible endorsement of positive impact, and −5 an anti-endorsement of a grant that is actively harmful to a significant degree. We then averaged the scores, approving those at the very top and dismissing those at the bottom, and largely discussed only the grants around the threshold of 2.5, unless anyone wanted to actively make the case for or against something outside of these bounds. (The size and scope of other grants, particularly the large grants we approve, is also discussed.)
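For concreteness, the scoring step described above could be sketched roughly as follows. This is purely illustrative: the function names, the use of `None` for a recusal, and the "discuss" margin around the 2.5 threshold are my assumptions, not details from the actual process.

```python
# Illustrative sketch of the voting step: each fund manager scores a grant
# from -5 to +5, or abstains (None) on a conflict of interest; scores are
# averaged, and grants near the 2.5 threshold are flagged for discussion.
# The margin of 0.5 is a made-up value for illustration.

def average_score(scores):
    """Average the scores, ignoring abstentions (None = conflict of interest)."""
    votes = [s for s in scores if s is not None]
    return sum(votes) / len(votes)

def triage(avg, threshold=2.5, margin=0.5):
    """Approve well above the threshold, reject well below, discuss near it."""
    if avg >= threshold + margin:
        return "approve"
    if avg <= threshold - margin:
        return "reject"
    return "discuss"

scores = [4, 3, None, 2]  # None marks a recused manager
avg = average_score(scores)
print(avg, triage(avg))   # 3.0 approve
```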
Similarly to LTFF,
we provide feedback on a subset of applications (both approved and rejected) where we believe our perspective could be particularly beneficial for the applicant's future work.
However, we only provide feedback if a grantee asks for it.
We don't have any immediate plans to write a longer post about the process outside of this AMA. However, we are generally planning to communicate more about the fund's approach, so that is something we could potentially draft in the future, unless other, higher-priority write-ups take precedence.
It seems to me like you might now be like 80% of the way to a write-up like the LTFF's one with this comment of yours, haha. Maybe it'd be easy enough to just lightly edit that into a Google Doc framed as "what AWF does" rather than "how AWF differs from LTFF", and then link to that from the fund's page or future posts?
(I don't really have a stake in this; just sharing a thought that occurred to me.)