I also sympathize with your confusion. FWIW, I think that a fair amount of uncertainty and confusion about the issues you’ve raised here is the epistemically adequate state to be in. (I’m less sure whether we can reliably reduce our uncertainty and confusion through more ‘research’.) I tentatively think that the “received longtermist EA wisdom” is broadly correct, i.e. roughly that the most good we can usually do (for most people in most situations) is to reduce specific existential risks (AI, bio, …), but I think that
(i) this is not at all obvious or settled, and involves judgment calls on my part which I could only partly make explicit and justify; and
(ii) the optimal allocation of ‘longtermist talent’ will have some fraction of people examining whether this “received wisdom” is actually correct, and will also have some distribution across existential risk reduction, what you call growth interventions, and other plausible interventions aimed at improving the long-term future (e.g. “moral circle expansion”), for basically the “switching cost” and related reasons you mention [ETA: see also sec. 2.4 of GPI’s research agenda].
One thing in your post I might want to question is that, outside of your more abstract discussion, you phrase the question as whether, e.g., “AI safety should be virtually infinitely preferred to other cause areas such as global poverty”. I’m worried that this is somewhat misleading, because I think most of your discussion really concerns the question of whether, to improve the long-term future, it’s more valuable to (a) speed up growth or to (b) reduce the risk of growth stopping. I think AI safety is a good example of a type-(b) intervention, but that most global poverty interventions likely aren’t a good example of a type-(a) intervention. This is because I would find it surprising if an intervention that has been selected to maximize some measure of short-term impact also turned out to be optimal for speeding up growth in the long run. (Of course, this is a defeasible consideration, and I acknowledge that there might be economic arguments suggesting that accelerating growth in currently poor countries is particularly promising for increasing overall growth.) In other words, I think the optimal “growth intervention” Alice would want to consider probably isn’t, say, donating to distribute bednets; I don’t have a considered view on what it would be instead, but I think it might be something like doing research in a particularly dynamic field that might drive technological advances, or advocating changes in R&D or macroeconomic policy. (For some related back-of-the-envelope calculations, see Paul Christiano’s post What is the return to giving?; they suggest “that good traditional philanthropic opportunities have a return of around 10 and the best available opportunities probably have returns of 100-1000, with most of the heavy hitters being research projects that contribute to long term tech progress and possibly political advocacy”, but of course there is a lot of room for error here.
See also this post for what maximally increasing technological progress might look like.)
Lastly, here are some resources on the “increase growth vs. reduce risk” question, which you might be interested in if you haven’t seen them:
Paul Christiano’s post on (literal) Astronomical waste, where he considers the permanent loss of value from delayed growth due to cosmological processes (expansion, stars burning out, …). In particular, he also mentions the possibility that “there is a small probability that the goodness of the future scales exponentially with the available resources”, though he ultimately says he favors roughly what you called the plateau view.
In an 80,000 Hours podcast, economist Tyler Cowen argues that “our overwhelming priorities should be maximising economic growth and making civilization more stable”.
For considerations about how to deal with uncertainty over how much utility will grow as a function of resources, see GPI’s research agenda, in particular the last bullet point of section 1.4. (That bullet point deals with the possibility of infinite utilities, which raises somewhat similar meta-normative issues. I thought I remembered that they also discuss the literal point you raised, i.e. whether utility might in the long run grow exponentially, but wasn’t able to find it.)
I might follow up in additional comments with some pointers to issues related to the one you discuss in the OP.
I have two comments concerning your arguments against accelerating growth in poor countries. One is more “inside view”, the other is more “outside view”.
The “inside view” point is that Christiano’s estimate only takes into account the “price of a life saved”. But in truth GiveWell’s recommendations for bednets or deworming are in large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than his calculations suggest. (Micronutrient supplementation may also be quite efficient in this respect.)
The “outside view” point is that I find our epistemology really shaky and worrisome. Let me transpose the question to AI safety, to illustrate that the point is not specific to growth interventions. If I want to make progress on AI safety, maybe I can try directly to “solve AI alignment”. Let’s say that I hesitate between this and trying to improve the reliability of current-day AI algorithms. I feel that, at least in casual conversations (perhaps especially with people who are not actually working in the area), people would be all too willing to jump to “of course the first option is much better, because this is the real problem; if it succeeds, we win”. But in truth there is a tradeoff with being able to make any progress at all; it is not automatically better to turn your attention to the most maximally long-term thing you can think of. And I think it is extremely useful to have some feedback loop that allows you to track what you are doing, and by necessity this feedback loop will be somewhat short-term. To summarize, I believe there is a “sweet spot” where you focus on things that seem to point in the right direction and also allow you at least some modicum of feedback over shorter time scales.
Now, consider the argument “this intervention cannot be optimal in the long run because it has been optimized for the short term”. This argument essentially allows you to reject any intervention that has shown great promise based on the observations we can actually gather. So effective altruism started out as “evidence-based” etc., and now we have reached a situation where we have built a theoretical construct that not only lets us place certain interventions above all others without giving any empirical evidence to back this up, but moreover, if another intervention is proposed that comes with good empirical backing, lets us use that very fact as an argument against it!
I may be pushing the argument a bit too far, but this still makes me feel very uncomfortable.
Regarding your “outside view” point: I agree with what you say here, but think it cannot directly undermine my original “outside view” argument. These clarifications may explain why:
My original outside view argument appealed to the process by which certain global health interventions, such as distributing bednets, have been selected, rather than to their content. The argument is not “global health is a different area from economic growth, therefore a health intervention is unlikely to be optimal for accelerating growth”; instead it is “an intervention that has been selected to be optimal according to some goal X is unlikely to also be optimal according to a different goal Y”.
In particular, if GiveWell had tried to identify those interventions that best accelerate growth, I think my argument would be moot (no matter what interventions they had come up with, in particular in the hypothetical case where distributing bednets had been the result of their investigation).
In general, I think that selecting an intervention that’s optimal for furthering some goal needs to pay attention to all of importance, tractability, and neglectedness. I agree that it would be bad to rely exclusively on the heuristic “just focus on the most important long-term outcome/risk” when selecting longtermist interventions, just as it would be bad to rely only on the heuristic “work on fighting whatever disease has the largest disease burden globally” when selecting global health interventions. But I think these would just be bad ways to select interventions, which seems orthogonal to the question of when an intervention selected for X will also be optimal for Y. (In particular, I don’t think my original outside view argument commits me to the conclusion that in the domain of AI safety it’s best to directly attack the largest or most long-term problem, whatever that is. I think it does recommend deliberately selecting an intervention optimized for reducing AI risk, but this selection process should also take into account feedback loops and all the other considerations you raised.)
The main way I can see to undermine this argument would be to argue that a certain pair of goals X and Y is related in such a way that interventions optimal for X are also optimal for Y (e.g., X and Y are positively correlated, though this in itself wouldn’t be sufficient). For example, in this case, such an argument could be of the type “our best macroeconomic models predict that improving health in currently poor countries would have a permanent rate effect on growth, and empirically it seems likely that the potential for sustained increases in the growth rate is largest in currently poor countries” (I’m not saying this claim is true, just that I would want to see something like this).
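To make the “permanent rate effect” phrase concrete, here is a minimal sketch of my own (the growth rates and horizon are made-up numbers, not from the discussion or from any cited model) showing why, under simple compound growth, even a small permanent change in the growth rate eventually dominates a much larger one-time level effect:

```python
# Hedged toy comparison (my own illustrative numbers): a one-time
# "level effect" vs a permanent "rate effect" on output under
# simple compound growth.
base_rate = 0.02      # assumed 2% annual baseline growth
years = 200           # assumed long horizon

level_boost = 1.10    # one-time 10% boost to the level of output
rate_boost = 0.001    # permanent +0.1 percentage point to the growth rate

baseline = (1 + base_rate) ** years
with_level = level_boost * baseline
with_rate = (1 + base_rate + rate_boost) ** years

# The level effect keeps output a constant 1.10x above baseline,
# while the rate effect compounds and grows without bound.
print(with_level / baseline)
print(with_rate / baseline)
```

At this horizon the tiny rate effect already outweighs the 10% level effect, which is why an argument of the kind sketched above (health gains producing a permanent rate effect) would matter so much if it held.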
Ok, I understand your point better now, and find that it makes sense. To summarize: I believe that the art of good planning toward a distant goal is to find a series of intermediate targets that we can focus on, one after the other. I was worried that your argument could be used against any such strategy. But in fact your point is that, as it stands, health interventions have not been selected by a “planner” who was actually thinking about the long-term goals, so it is unlikely that the selected interventions are the best we can find. That sounds reasonable to me. I would really like to see more research into what optimizing for long-term growth could look like (and what kind of “intermediate targets” this would select). (There is some of this in Christiano’s post, but there is clearly room for more in-depth analysis in my opinion.)
The “inside view” point is that Christiano’s estimate only takes into account the “price of a life saved”. But in truth GiveWell’s recommendations for bednets or deworming are in large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than his calculations suggest. (Micronutrient supplementation may also be quite efficient in this respect.)
I think this is a fair point. Specifically, I agree that GiveWell’s recommendations are only partly (in the case of bednets) or not at all (in the case of deworming) based on literally averting deaths. I haven’t looked at Paul Christiano’s post in sufficient detail to say for sure, but I agree it’s plausible that this way of using “price of a life saved” calculations might effectively ignore other benefits, thus underestimating the benefits of bednet-like interventions compared to GiveWell’s analysis.
I would need to think about this more to form a considered view, but my guess is that this wouldn’t change my tentative belief that global health interventions selected for their short-term benefits (say, benefits within the next 20 years) aren’t optimal growth interventions. This is largely because I think the dialectical situation looks roughly like this:
The “beware suspicious convergence” argument implies that it’s unlikely (though not impossible) that health interventions selected for maximizing certain short-term benefits are also optimal for accelerating long-run growth. The burden of proof is thus with the view that they are optimal growth interventions.
In addition, some back-of-the-envelope calculations suggest the same conclusion as the first bullet point.
You’ve pointed out a potential problem with the second bullet point. I think it’s plausible, perhaps even likely, that this significantly or entirely removes the force of the second bullet point. But even if the conclusions of the calculations were completely turned on their head, I don’t think they would by themselves defeat the first bullet point.
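The “beware suspicious convergence” point in the first bullet can be illustrated with a toy simulation of my own (all numbers and the correlation structure are assumptions for illustration, not anything from the discussion): even when interventions’ scores on goal X and goal Y are strongly positively correlated, the intervention that is best for X is rarely also best for Y once there are many candidates.

```python
# Toy model: draw many hypothetical interventions whose scores on
# goals X and Y are positively correlated, and count how often the
# X-optimal intervention is also the Y-optimal one.
import math
import random

random.seed(0)
N = 1000        # assumed number of candidate interventions
RHO = 0.8       # assumed correlation between X-score and Y-score
TRIALS = 200

hits = 0
for _ in range(TRIALS):
    interventions = []
    for _ in range(N):
        x = random.gauss(0, 1)
        # y shares a common component with x (correlation ~ RHO)
        y = RHO * x + math.sqrt(1 - RHO**2) * random.gauss(0, 1)
        interventions.append((x, y))
    best_for_x = max(interventions, key=lambda s: s[0])
    best_for_y = max(interventions, key=lambda s: s[1])
    hits += best_for_x is best_for_y

print(f"Top-X is also top-Y in {hits}/{TRIALS} trials")
```

Under these made-up parameters the coincidence is uncommon, which is the shape of the argument: positive correlation between short-term benefits and long-run growth, even if real, would not by itself make the short-term-optimal intervention growth-optimal.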
Thank you, I think this is an excellent post!