We recently transferred a lot of the ‘best practices’ that each fund (especially the LTFF) discovered to all the other funds, and as a result, I think the funds now operate very similarly, with at most minor differences at this point.
What were the most important practices you transferred?
Having an application form that asks more detailed questions (e.g., the project’s path to impact, a CV/resume, the names of the people involved with the applying organization, confidential information)
Having a primary investigator for each grant (who gets additional input from 1-3 other fund managers), rather than having everyone review all grants
Using score voting with a threshold (rather than ordering grants by expected impact, then spending however much money we have); see the sketch after this list
Explicitly considering giving applicants more money than they applied for
Offering feedback to applicants under certain conditions (if we feel like we have particularly useful thoughts to share with them, or they received an unusually high score in our internal voting)
Asking for references in the first stage of the application form, but without requiring applicants to clear them ahead of time (so it’s low-effort for them, but we already know who the references would be)
Having an automatically generated Google Doc for each application that contains all the information related to a particular grant (original application, evaluation, internal discussion, references, applicant emails, etc.)
Writing in-depth payout reports to build trust and help improve community epistemics, then switching to shorter, lower-effort payout reports once that’s done and we want to save time
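To make the score-voting point concrete, here is a minimal sketch of the difference between the two approaches. The scores, threshold, budget, and application names are made up for illustration and are not the funds’ actual data or implementation; the point is just that with a threshold, total spending falls out of how many grants clear the bar, whereas ranking by expected impact and funding down the list spends whatever budget is available.

```python
from statistics import mean

# Hypothetical applications: per-manager scores and requested amounts.
applications = {
    "grant_a": {"scores": [4, 5, 3], "amount": 30_000},
    "grant_b": {"scores": [2, 3, 2], "amount": 50_000},
    "grant_c": {"scores": [5, 4, 4], "amount": 20_000},
}

FUNDING_THRESHOLD = 3.5  # illustrative quality bar, not a real figure


def score_voting_with_threshold(apps, threshold):
    """Fund every application whose mean score clears the threshold,
    regardless of how much money is available overall."""
    return [name for name, app in apps.items()
            if mean(app["scores"]) >= threshold]


def rank_and_spend_budget(apps, budget):
    """Order applications by mean score (a stand-in for expected impact)
    and fund down the list until the budget runs out."""
    funded = []
    for name, app in sorted(apps.items(),
                            key=lambda item: mean(item[1]["scores"]),
                            reverse=True):
        if app["amount"] <= budget:
            funded.append(name)
            budget -= app["amount"]
    return funded


print(score_voting_with_threshold(applications, FUNDING_THRESHOLD))
# ['grant_a', 'grant_c'] -- total spend follows from the quality bar
print(rank_and_spend_budget(applications, budget=100_000))
# ['grant_c', 'grant_a', 'grant_b'] -- total spend is fixed by the budget
```

The contrast in the output is the point: under the threshold approach, grant_b simply isn’t funded, while under the ranking approach it gets funded because money is left over after the higher-scoring grants.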