It’s not binary, though. Think of the intermediate-micro utility-maximization problem: you allocate your budget across goods until marginal utility per dollar is equalized. With diminishing marginal utility, you will generally spread your budget across multiple goods.
Similarly, we should expect to allocate the EA budget across a portfolio of causes. Yes, it’s possible that one cause has the highest MU/$ and that diminishing returns won’t affect anything in the range of our budget (i.e., after spending our entire budget on that cause, it still has the highest MU/$), but I see no reason to assume this is the default case.
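A minimal sketch of the equalization logic, using hypothetical causes with log utility u_i(x) = a_i·ln(x) (so MU/$ is a_i/x, which diminishes in spending); the cause names, weights, and budget are invented for illustration:

```python
# Hypothetical causes with log utility u_i(x) = a_i * ln(x),
# so marginal utility per dollar is a_i / x (diminishing in x).
weights = {"cause_A": 4.0, "cause_B": 2.0, "cause_C": 1.0}
budget = 70.0

# Setting a_i / x_i equal across causes gives x_i proportional to a_i.
total = sum(weights.values())
allocation = {c: budget * a / total for c, a in weights.items()}

# Every cause gets a share, not just the "best" one.
print(allocation)

# At the optimum, marginal utility per dollar is identical everywhere.
mu_per_dollar = {c: weights[c] / allocation[c] for c in weights}
print(mu_per_dollar)
```

The point is that even though cause_A has the highest weight, the optimum still funds all three, because spending drives each cause’s MU/$ down until they meet.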
The reason to make that assumption is that EA is just a very small component of the global budget and we are typically dealing with large problems, so our funding usually does little to change marginal returns.
In some cases, like AI risk, the problem is “small” (i.e. our small amount of extra funding can meet the main practical requirements for the time being). However, for big economic issues, that doesn’t seem to be the case.
We should disaggregate down to the level of specific funding opportunities. E.g., suppose the top three interventions for hits-based development are {funding think tanks in developing countries, funding academic research, charter cities} with corresponding MU/$ of {1000, 200, 100}. Suppose it takes $100M to fully fund developing-country think tanks, after which there’s a large drop in MU/$ (moving to the next intervention, academic research). In this case, despite economic development being a huge problem area, we do see diminishing returns at the intervention level within the range of the EA budget.
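With flat MU/$ within each intervention, the optimal allocation is just a greedy fill: fund the highest-MU/$ intervention to capacity, then move to the next. A sketch using the toy numbers above (the capacities beyond the first $100M are invented for illustration):

```python
# Hypothetical interventions: (name, MU per dollar, funding capacity in $M).
# MU/$ figures follow the toy example; capacities past $100M are assumptions.
interventions = [
    ("think_tanks", 1000, 100),
    ("academic_research", 200, 300),
    ("charter_cities", 100, 500),
]

def allocate(budget_m):
    """Greedily fund the highest-MU/$ intervention to capacity, then the next."""
    plan = {}
    for name, mu, cap in sorted(interventions, key=lambda t: -t[1]):
        spend = min(budget_m, cap)
        if spend > 0:
            plan[name] = spend
            budget_m -= spend
    return plan

print(allocate(150))  # {'think_tanks': 100, 'academic_research': 50}
```

Here diminishing returns appear not within an intervention but as the step down from 1000 to 200 once the $100M capacity is exhausted: a budget of $150M cannot stay at MU/$ = 1000.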
I think that kind of spikiness (1000, 200, 100 with big gaps between) isn’t the norm. Often one can proceed to weaker, more indirect versions of a top intervention (funding scholarships to expand the talent pipeline for said think tanks, buying more Google Ads to publicize their research) with lower marginal utility. These smooth out the returns curve: you fund progressively less appealing and more ancillary versions of the 1000-MU/$ intervention until returns drop down into the range of the 200-MU/$ intervention.
Do you think that affects the conclusion about diminishing returns?
Yup, agreed.