> grantmakers fall back on prestige because they don’t always have the resources to properly evaluate ideas
It seems like this recent post describes the opposite pattern: someone with a highly prestigious resume spent a lot of resources getting evaluated, and was rejected anyway. I wonder why the pattern would differ between hiring and grantmaking?
Anyway, one idea for helping address the bottleneck is to maintain a shared open-source grantmaking algorithm. The algorithm could include forecasting best practices, a list of ways projects can cause harm, etc. Every time a project fails despite our hopes, or succeeds despite our concerns, we could update the algorithm with what we've learned. It could be shared between established EA grantmakers, donor lottery winners, independent angels, etc.
I don’t think such an algorithm would eliminate the need for domain expertise, but it might make it less of a bottleneck. The ideal audience might be an EA who is earning to give and thinking of donating to a friend’s project. They can vouch for their friend, but have only limited domain expertise in the area of their friend’s project. They could work through some fraction of the algorithm on their own, then maybe step 7 would be: “Find a domain expert in the EA community. Have them glance over everything you’ve done so far to evaluate this project and let you know what you’re missing.” (Arguably the biggest weakness of amateurs relative to experts is that amateurs don’t know what they don’t know. Plausibly it’s also valuable to involve at least one person who is not friends with the project leader, to fight social desirability bias etc. Another way to help address the unknown-unknowns problem is making a post to this forum and paying for critical feedback. Open Philanthropy has a relevant essay regarding what they aim for in their writeups.)
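To make the idea more concrete, here's a minimal sketch of what a shared, updatable checklist could look like as a data structure. This is purely illustrative: the class names, steps, and lessons are all invented for the example, not a real proposal for the algorithm's contents.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One checklist item an evaluator works through."""
    description: str
    rationale: str = ""

@dataclass
class GrantmakingAlgorithm:
    """A shared checklist, updated whenever outcomes surprise us."""
    steps: list = field(default_factory=list)
    lessons: list = field(default_factory=list)

    def add_step(self, description: str, rationale: str = "") -> None:
        self.steps.append(Step(description, rationale))

    def record_lesson(self, outcome: str, lesson: str) -> None:
        # Projects that fail despite our hopes, or succeed despite our
        # concerns, feed back into the shared algorithm.
        self.lessons.append((outcome, lesson))

# Hypothetical contents, just to show the shape:
algo = GrantmakingAlgorithm()
algo.add_step("Check the project against a list of ways projects cause harm")
algo.add_step("Make an explicit forecast of the project's chance of success")
algo.add_step("Find a domain expert in the EA community to review your work")
algo.record_lesson("failed despite our hopes",
                   "Weight track record of shipping over credentials")
```

The point of keeping it open-source and versioned is that each grantmaker's surprising outcomes improve the checklist everyone else uses.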