This post is helpful and appropriately cautious! Thanks Linch.
It feels like adverse selection is a common enough phenomenon that there must be helpful case studies to learn from. I explored this with GPT, and it suggested the following solutions for philanthropic grantmaking:
1. Third-party assessments: An independent body can evaluate projects or grantees and provide a certification.
2. Open feedback mechanisms: Existing and past donors can leave reviews or feedback on projects, helping to inform potential future donors.
3. Tiered grants: Offer different levels of funding based on the risk or novelty of the project. Riskier projects might get smaller, initial amounts with the possibility of more significant funding later if they show promise.
4. Pilot funding rounds: Similar to probationary periods, fund a project for a short time or with limited funds to assess its viability before committing more.
5. Collaborative funding: Multiple grantmakers can come together to fund a project, thereby sharing the risk.
6. Transparency in rejections: While specific details might remain confidential, grantmakers can provide general reasons for rejection, helping to guide potential donors.
7. Mentorship or guidance: Instead of just providing funds, offer mentorship or guidance to projects, helping them to develop in areas where they might be lacking.
I’m pleased with (2) -- I’ve been putting time into open feedback on Manifund. And (5) points at something helpful: when it’s okay for projects to receive only partial funding and every project applies to the same set of funders, each funder contributing only “their part” limits the possible damage without anyone needing to share private information. (A funder declining to put in their part can itself be informative.)
Otherwise, these suggestions seem obvious or unhelpful. But I expect that a couple-of-hours dive into how philanthropists or science funders have dealt with these dynamics would be better. Nice project for someone! (@alex lawsen (previously alexrjl)?)
Thanks for the suggestions :)
I don’t quite understand how “Mentorship or guidance”, “Pilot funding rounds”, or “Tiered grants” would help with adverse selection effects. Could you expand?
No problem, thanks for engaging!
I wrote that most of GPT-4’s suggestions were “obvious or unhelpful,” and I’d put the ones you pointed to in that category. Pilot funding and tiered grants are presumably things you already do implicitly, e.g. by not committing funding for multiple years in one go, or by not giving large resources to grantees you don’t think highly of, so making them more explicit wouldn’t buy you much. And mentorship or guidance seems unhelpful because it’s much too time-costly.
That said, I’m guessing GPT-4 is trying to point at ways to lower the information asymmetry characteristic of adverse selection: all three of these methods offer money-cheap ways of gaining more information before making money-expensive decisions.