I think ultimately funding diversification is a useful signal that your project looks good under somewhat independent assessments from multiple funders. Particularly if you are working in a space without a charity evaluator or clear feedback loops, I think you should not aim for the maximum amount of money you can get, but instead for the sweet spot where you produce the most impact per dollar, at the highest scale that is still well above the counterfactual bar as assessed by independent but informed people (e.g., four separate funders).
I think there may be a useful broader principle here, but it may imply a sweet spot closer to the ~median counterfactual funder bar rather than one “well above” multiple independent informed evaluator bars.[1]
In my model, funders estimate the impact of each charity and set a funding bar based on the results of their impact assessments across all candidate charities. Each funder’s individual estimate for any charity carries a measurement error term. However, a funder’s estimation errors are also reflected in its funding bar (because the bar is built on the funder’s estimates). Thus, if the funder is 50% too optimistic on average, its funding bar will be inflated by roughly 50% as well. In other words, there’s no reason to assume by default that the funder is systematically bullish or bearish on a randomly selected charity. In most cases, funders should be at least roughly as likely to be right about the charity in question as about the charities to which they would counterfactually donate.
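To make the bias-cancellation point concrete, here is a toy simulation (all the specific numbers — 100 charities, a 1.5x optimism bias, the noise level, a 20-charity budget — are my own illustrative assumptions, not anything from the comment). A funder whose estimates are consistently 50% too optimistic ends up with a funding bar inflated by roughly the same factor, so charities are still compared against that bar on nearly even terms:

```python
import random

random.seed(0)

# Toy world: 100 candidate charities with true cost-effectiveness values.
true_ce = [random.lognormvariate(0, 1) for _ in range(100)]

# A funder whose estimates carry a constant 1.5x optimism bias plus
# independent per-charity noise.
BIAS = 1.5
estimates = [ce * BIAS * random.lognormvariate(0, 0.3) for ce in true_ce]

# Suppose the funder can fund 20 charities; its bar is its estimate of
# the 20th-best candidate. Because the bar is built from the same biased
# estimates, the 1.5x optimism shows up in the bar too.
true_bar = sorted(true_ce, reverse=True)[19]
funder_bar = sorted(estimates, reverse=True)[19]

print(round(funder_bar / true_bar, 2))  # close to BIAS, not to 1.0
```

The ratio of the funder's bar to the true bar lands near the bias factor rather than near 1, which is the sense in which a uniformly over-optimistic funder is not, by default, over-optimistic about any particular charity relative to its own bar.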
If all the funders are reasonably competent, the best measure of a charity’s cost-effectiveness is likely to be some measure of central tendency (median, mean, or similar) of funder evaluations. In this model, seeking to be “above the bar of what [the charity’s] most permissive funder would accept” makes a lot of sense: there’s a high risk that the charity’s most permissive funder is permissive because of its own measurement error, rather than because all the other funders measured wrong.
On the other hand, if most of the charity’s funders would fund it at a higher spending level, that is at least some evidence that the marginal funding would be cost-effective. Charities likely have better information on their own impact, including the marginal impact of additional funding, than funders possess; but funders probably have a better sense of the counterfactual value of the funding.[2] As the number of funders who think additional marginal funding for the charity would be above-bar increases, so should the odds that the charity is underrating its own effectiveness relative to other charities.
In the end, given certain assumptions about funders, a guideline of “stop growing when a ~majority of funders would not find your growth cost-effective relative to the bar” may minimize the risk of various types of errors here. That strikes me as a moderately pro-growth perspective in comparison to a goal of remaining “well above the counterfactual bar.”
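The stopping rule can be sketched in the same toy terms (the functional form for declining returns, the number of funders, and the noise level below are all illustrative assumptions of mine): true marginal cost-effectiveness falls as the charity scales, several noisy but unbiased funders each judge the next unit of growth against a common bar, and the charity keeps growing only while a majority would fund that growth.

```python
import random

random.seed(1)

N_FUNDERS = 5
BAR = 1.0  # each funder's bar, in true cost-effectiveness units

def marginal_ce(scale):
    """True marginal cost-effectiveness: declining returns to scale."""
    return 3.0 / (1 + scale)

def votes_for_growth(scale):
    """How many funders judge the next unit of growth above-bar."""
    noisy = [marginal_ce(scale) * random.lognormvariate(0, 0.4)
             for _ in range(N_FUNDERS)]
    return sum(est >= BAR for est in noisy)

scale = 0
while votes_for_growth(scale) > N_FUNDERS // 2:
    scale += 1  # a majority still finds growth cost-effective

print("stopped at scale", scale,
      "with true marginal CE", round(marginal_ce(scale), 2))
```

With these numbers the loop tends to halt once true marginal cost-effectiveness is in the neighborhood of the bar — the “stop when a ~majority of funders would not fund your growth” guideline in miniature, with the noise determining how far past the bar any single run overshoots.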
[1] This analysis is based on the assumption that funders are pretty good at what they do.
[2] This is likely to vary by cause area; for example, the counterfactual value of additional money to GiveWell top charities is comparatively easy to understand.
I think the suggestion here makes sense, although I likely have a more pessimistic model of funder (and, for that matter, charity) rationality. For example, I expect a charismatic charity founder to have ~2x the fundraising ability of an equally talented but less charismatic one, even in EA. This adds noise to the system and makes me inclined to set higher bars to compensate.