Wild speculation:
I think one reason this area may get less attention in EA is that if you’re willing to sign up for high-risk high-return scenarios that are more theory-driven and less retrospective-data-driven (like economic growth), you’re also more sympathetic to long-termist areas like x-risk. And once you’re comparing x-risk to economic growth, there’s no guarantee that growth wins.
In other words, I think economic growth may be competing against x-risk—not RCTs—among EAs.
(Though certain ethical views may argue against long-termist interventions like x-risk reduction. A focus on economic growth may be the best fit for people that are “epistemically permissive” but “ethically conservative”, if that makes sense.)
Yes, interesting take.
Aside from risk aversion, in the appendix I list some more cognitive biases that might explain why people prefer RCTs.
Relatedly, perhaps people sympathetic to long-termism believe that speeding up growth might speed up GCRs from emerging technologies. And while it is unclear whether growth speeds up x-risk at all (see for instance), I think that when it comes to differential technological development, not all growth is equal.
What speeds up risks from emerging technologies is mostly growth in highly technical sectors in high-income countries. Growth in low-income countries will not increase world growth much and is less likely to cause risks from emerging technologies.
Put simply: Burundi’s catch-up growth won’t speed up global growth by much and is unlikely to speed up risks from AI or bio any time soon. Growth has been argued to lead to “Greater opportunity, tolerance of diversity, social mobility, commitment to fairness, and dedication to democracy.” Perhaps growth in poor countries will actually increase stability and thus be good from a differential technological development point of view.
Lower-skilled labor also competes with AI R&D, so increasing trade and migration would decrease AI R&D (see “Why Are [Silicon Valley] Geniuses Destroying Jobs in Uganda?”).
But even if growth in poor countries slightly increases x-risk, it might still be optimal to support it and offset the increase through targeted x-risk-reduction interventions. This is because multi-objective optimization over both x-risk reduction and global poverty is likely harder than single-objective optimization for the most effective interventions in each category separately.
If lower-skilled labor reduces AI R&D and therefore slows the pace of AI development, wouldn’t it also reduce x-risk from AI?
Rather than being wild speculation, I think this is clearly correct, and it needs to be mentioned any time someone criticizes EA for focusing too much on proven interventions instead of things like economic growth.
However, there are other causes that can be good under such a moderate epistemic view: growing Effective Altruism, curing aging, fighting climate change, partisan politics, improving foreign policy, etc. All of these have been recognized by some Effective Altruists as important and will compete with economic growth for attention.
This is speculative, but I suspect many of the things you mentioned fall into the category of areas that seem pretty impactful, potentially on par with EA’s main cause areas (poverty, animals, x-risk), but that don’t seem to warrant much EA manpower or funding right now. A small number of EAs who identify such an area can work on it, and that’s great (and the EA movement should encourage that, with sufficient justification of the impact), but I can see why the movement doesn’t treat them as main causes.
(I don’t necessarily agree that all of the ideas you mentioned belong to these categories, and I probably don’t know enough about them to judge, though I can see many of them being such an area.)
A digression, but I do wonder whether people working on these smaller, niche areas in an EA spirit (assuming they made the right call on the impact and it’s just an area that can’t absorb a lot of EA resources) feel sidelined or dismissed by the EA movement. (This might be the case for climate, for instance.) And if that is really the case, I wonder how the EA movement could get better at encouraging such independent thinking and work.
The answer is simply to grow the EA movement so that more causes have adequate numbers of people working on them, rather than worrying about giving each cause an equal slice of the pie.
Would you say that, almost 4 years later, we’ve made progress on that front?
It’s not binary, though. Think of the intermediate micro utility maximization problem: you allocate your budget across goods until marginal utility per dollar is equalized. With diminishing marginal utility, you generally will spread your budget across multiple goods.
Similarly, we should expect to allocate the EA budget across a portfolio of causes. Yes, it’s possible that one cause has the highest MU/$, and that diminishing returns won’t affect anything in the range of our budget (ie, after spending our entire budget on that cause, it still has the highest MU/$), but I see no reason to assume this is the default case.
More here.
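(To make the textbook logic concrete, here’s a minimal sketch with made-up causes, utility functions, and numbers, purely illustrative: allocate a budget in small increments, each time to whichever cause currently has the highest marginal utility per dollar. With diminishing returns, the spending ends up spread across causes and the realized MU/$ is roughly equalized.)

```python
# Illustrative only: hypothetical causes with diminishing returns,
# utility = scale * log(1 + spend), so MU/$ = scale / (1 + spend).
scales = {"cause_A": 1000.0, "cause_B": 400.0, "cause_C": 150.0}

def mu_per_dollar(scale, spend):
    return scale / (1.0 + spend)

budget, step = 300.0, 1.0            # arbitrary units
spend = {c: 0.0 for c in scales}

for _ in range(int(budget / step)):
    # Put the next dollar wherever the current MU/$ is highest.
    best = max(scales, key=lambda c: mu_per_dollar(scales[c], spend[c]))
    spend[best] += step

print(spend)  # the budget splits across all three causes, with MU/$ roughly equalized
```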
The reason to make that assumption is that EA is just a very small component of the global budget and we are typically dealing with large problems, so our funding usually does little to change marginal returns.
In some cases, like AI risk, the problem is “small” (i.e. our small amount of extra funding can meet the main practical requirements for the time being). However, for big economic issues, that doesn’t seem to be the case.
We should disaggregate down to the level of specific funding opportunities. Eg, suppose the top three interventions for hits-based development are {funding think tanks in developing countries, funding academic research, charter cities} with corresponding MU/$ {1000, 200, 100}. Suppose it takes $100M to fully fund developing-country think tanks, after which there’s a large drop in MU/$ (moving to the next intervention, academic research). In this case, despite economic development being a huge problem area, we do see diminishing returns at the intervention level within the range of the EA budget.
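(A rough sketch of that scenario, taking the 1000/200/100 MU/$ figures and the $100M capacity from above and assuming the rest: each intervention delivers a constant MU/$ up to a funding cap, and the budget fills the best opportunities first. The marginal dollar drops from 1000 to 200 once the think tanks are fully funded, i.e. returns diminish sharply well within a plausible EA budget.)

```python
# Interventions as (name, MU per dollar, capacity in $M); the 1000/200/100 and
# $100M figures come from the example above, the other capacities are assumed.
interventions = [
    ("developing-country think tanks", 1000, 100),
    ("academic research",              200,  400),
    ("charter cities",                 100,  500),
]

def allocate(budget_m):
    """Fund interventions in descending MU/$ order, up to each one's capacity."""
    plan = []
    for name, mu, cap in sorted(interventions, key=lambda x: -x[1]):
        spent = min(cap, budget_m)
        if spent > 0:
            plan.append((name, spent, mu))
            budget_m -= spent
    return plan

for budget in (50, 150, 300):  # hypothetical EA budgets in $M
    plan = allocate(budget)
    print(budget, "->", plan[-1][0], "at MU/$ =", plan[-1][2])
# At $50M the marginal dollar earns 1000; by $150M it has already fallen to 200,
# even though the overall problem area dwarfs the budget.
```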
I think that kind of spikiness (1000, 200, 100 with big gaps between) isn’t the norm. Often one can proceed to weaker and indirect versions of a top intervention (funding scholarships to expand the talent pipelines for said think-tanks, buying them more Google Ads to publicize their research) with lower marginal utility that smooth out the returns curve, as you do progressively less appealing and more ancillary versions of the 1000-intervention until they start to get down into the 200-intervention range.
Do you think that affects the conclusion about diminishing returns?
Yup, agreed.
That’s very plausible. So, if someone wants EA to focus on growth, they should use different strategies to convince x-riskers that it’s better for the long-term (ex: “read Tyler Cowen”) or welfare/equality EAs that it’s better for low-income people (“read… Tyler Cowen?”).