A 10x increase in labor leading to a 3x increase in impact feels surprising to me. At least in the regime of ~2xing supply, I doubt returns diminish that quickly. But I haven't thought about this deeply, and I agree there is some rate of diminishing marginal returns at which marginal growth would become net negative.
The response to Michael is an interesting point, but it only concerns diminishing returns in individual capabilities of new members.
Diminishing returns are mainly driven by the quality of opportunities being used up, rather than by the capabilities of new members.
IIRC, a 10x increase in resources to get a 3x increase in impact was a typical answer in the old coordination forum survey.
In the past at 80k I’d often assume a 3x increase in inputs (e.g. advising calls) to get a 2x increase in outputs (impact-adjusted plan changes), and that seemed to be roughly consistent with the data (though the data don’t tell us that much). In some cases, returns seem to diminish a lot faster than that. And you often face diminishing returns at several levels (e.g. 3x as much marketing to get 2x as many applicants to advising).
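A rough way to compare these figures is a constant-elasticity (power-law) model, $I(x) = c\,x^{\alpha}$; this is my simplification for illustration, not something the survey respondents or the 80k data imply. The implied elasticities are:

$$\alpha = \frac{\ln(\text{output multiple})}{\ln(\text{input multiple})}, \qquad \alpha_{10\times \to 3\times} = \frac{\ln 3}{\ln 10} \approx 0.48, \qquad \alpha_{3\times \to 2\times} = \frac{\ln 2}{\ln 3} \approx 0.63.$$

Stacked stages multiply: if applicants scale with marketing as $A \propto M^{\alpha_1}$ and plan changes scale with applicants as $P \propto A^{\alpha_2}$, then $P \propto M^{\alpha_1 \alpha_2}$. So two stages each at 0.63 give an end-to-end elasticity of about $0.63 \times 0.63 \approx 0.40$, steeper diminishing returns than either stage alone.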
I agree returns are more linear in areas where EA resources are a small fraction of the total, like global health, but that’s not the case in areas like AI safety, GCBRs, new causes like digital sentience, or promoting EA.
And even in global health: if GiveWell only had $100m to allocate, average cost-effectiveness would be a lot higher (maybe 3-10x higher?) than where the marginal dollar goes today. If GiveWell had to allocate $10bn, I'd guess the marginal spending would be at least severalfold less cost-effective again.
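To put a rough shape on that claim, under the same illustrative power-law model (again my assumption, not GiveWell's analysis), marginal cost-effectiveness falls as $I'(B) \propto B^{\alpha - 1}$, so scaling the budget by a factor $k$ scales the value of the marginal dollar by $k^{\alpha - 1}$:

$$\frac{I'(kB)}{I'(B)} = k^{\alpha - 1}, \qquad \text{e.g. } \alpha \approx 0.5: \quad k = 10 \Rightarrow 10^{-0.5} \approx 0.32, \qquad k = 100 \Rightarrow 100^{-0.5} = 0.1.$$

That is, going from \$100m to \$1bn would cut marginal cost-effectiveness roughly 3x, and \$100m to \$10bn roughly 10x, in the same ballpark as the figures above.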
Thanks! See my response to Michael for some thoughts on diminishing returns.