I think that a labor overhang is more acceptable than a funding overhang because 1) if people are interested in working on something, they may be able to find funding in their immediate circles or from non-EA grantmaking institutions; the latter also normalizes EA-related focus among those institutions. Further, 2) some cost-effective work consists of influencing the decisionmaking of people in influential, commercially paid positions; it would not make sense to pay such an influencer for the '10 minutes per week it takes to pass along a memo.'
Excess demand for projects (a funding overhang) should not exist: I do not believe that pay for EA-related project work is uncompetitive, or that talent, advisory capacity, and project ideas are scarce relative to funding. The overhang may instead reflect grantmakers' inertia in funding projects that absorb large amounts of money but do not solve problems. Or the elephant in the room may be strategic considerations: for example, how to frame AI safety narratives so that they gain acceptance from big tech companies rather than rejection, while still suggesting a cost-effective focus. It may be believed that only a few individuals or organizations should be highlighted at this point, and that problem solving, such as developing a debiasing algorithm, should be postponed until a sufficient share of companies has bought in. This would cause a funding overhang.
So, is a funding overhang actually risky if there is no central chest? For example, suppose someone has $100,000 to donate to a group of their friends developing special software for an authoritarian government to help it open up. Or is the no-chest alternative better? No sensible person would donate to such a risky project, but individuals are allowed to make (and amend) more mistakes than grantmaking organizations more officially associated with EA (plus, the dynamics are different: a favor for a friend vs. following 'another institution's doctrine').
Overall, a funding overhang is nothing to worry about, but a labor overhang would be fine too.