I have two comments concerning your arguments against accelerating growth in poor countries. One is more “inside view”, the other is more “outside view”.
The “inside view” point is that Christiano’s estimate only takes the “price of a life saved” into account. But in truth GiveWell’s recommendations for bednets or deworming are in large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who lead more productive lives. This may yield better returns than his calculations suggest. (Micronutrient supplementation may also be quite cost-effective in this respect.)
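To make this concrete, here is a minimal back-of-the-envelope sketch of the comparison I have in mind; every number below is a made-up placeholder of mine, not a figure from GiveWell or from Christiano’s post:

```python
# All inputs are hypothetical placeholders, chosen only to show the
# structure of the comparison, not to estimate anything.
cost_per_life_saved = 5_000            # $ per death averted (hypothetical)
children_treated_per_million = 50_000  # children treated per $1M (hypothetical)
income_gain_per_year = 30              # extra $/year earned as an adult (hypothetical)
years_until_adulthood = 10
working_years = 40
discount_rate = 0.03

deaths_averted = 1_000_000 / cost_per_life_saved

# Present value of one treated child's future income gains,
# starting once they reach working age.
pv_per_child = sum(
    income_gain_per_year / (1 + discount_rate) ** t
    for t in range(years_until_adulthood, years_until_adulthood + working_years)
)

print(f"Deaths averted per $1M: {deaths_averted:.0f}")
print(f"PV of adult income gains per $1M: "
      f"${children_treated_per_million * pv_per_child:,.0f}")
```

The point is purely structural: whenever the second quantity is comparable in value to the first, an estimate keyed to deaths averted alone will miss a large share of the intervention’s returns.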
The “outside view” point is that I find our epistemology really shaky and worrisome here. Let me transpose the question into AI safety, to illustrate that the point is not specific to growth interventions. If I want to make progress on AI safety, maybe I can try directly to “solve AI alignment”. Let’s say that I hesitate between this and trying to improve the reliability of current-day AI algorithms. I feel that, at least in casual conversations (perhaps especially with people who are not actually working in the area), people would be all too willing to jump to “of course the first option is much better, because this is the real problem; if it succeeds, we win”. But in truth there is a tradeoff with being able to make any progress at all; it is not simply better to turn your attention to the most long-term thing you can think of. And I think it is extremely useful to have some feedback loop that lets you track what you are doing, and by necessity this feedback loop will be somewhat short-term. To summarize, I believe there is a “sweet spot” where you focus on things that seem to point in the right direction while still allowing at least some modicum of feedback over shorter time scales.
Now, consider the argument “this intervention cannot be optimal in the long run because it has been optimized for the short term”. This argument essentially allows you to reject any intervention that has shown great promise based on the observations we can gather. So effective altruism started out as “evidence-based”, and now we have reached a situation where we have built a theoretical construct that not only lets us place certain interventions above all others without giving any empirical evidence for doing so, but moreover, if another intervention is proposed that comes with good empirical backing, lets us use this very fact as an argument against it!
I may be pushing the argument a bit too far, but this still makes me feel very uncomfortable.
Regarding your “outside view” point: I agree with what you say here, but I think it cannot directly undermine my original “outside view” argument. These clarifications may explain why:
My original outside view argument appealed to the process by which certain global health interventions, such as distributing bednets, were selected, rather than to their content. The argument is not “global health is a different area from economic growth, therefore a health intervention is unlikely to be optimal for accelerating growth”; it is “an intervention that has been selected to be optimal according to some goal X is unlikely to also be optimal according to a different goal Y”.
In particular, if GiveWell had tried to identify those interventions that best accelerate growth, I think my argument would be moot (no matter what interventions they had come up with, including in the hypothetical case where distributing bednets had been the result of their investigation).
In general, I think that selecting an intervention that’s optimal for furthering some goal needs to pay attention to all of importance, tractability, and neglectedness. I agree that it would be bad to rely exclusively on the heuristic “just focus on the most important long-term outcome/risk” when selecting longtermist interventions, just as it would be bad to rely only on the heuristic “work on fighting whatever disease has the largest disease burden globally” when selecting global health interventions. But these would simply be bad ways to select interventions, which seems orthogonal to the question of when an intervention selected for X will also be optimal for Y. (In particular, I don’t think my original outside view argument commits me to the conclusion that in the domain of AI safety it’s best to directly attack the largest or most long-term problem, whatever that is. I think it does recommend deliberately selecting an intervention optimized for reducing AI risk, but this selection process should also take into account feedback loops and all the other considerations you raised.)
The main way I can see to undermine this argument would be to argue that a certain pair of goals X and Y is related in such a way that interventions optimal for X are also optimal for Y (e.g., X and Y are positively correlated, though this in itself wouldn’t be sufficient). For example, in this case, such an argument could be of the type “our best macroeconomic models predict that improving health in currently poor countries would have a permanent rate effect on growth, rather than a one-off level effect, and empirically it seems likely that the potential for sustained increases in the growth rate is largest in currently poor countries” (I’m not saying this claim is true, just that I would want to see something like this).
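To unpack why a “rate effect” would be such a strong claim: a one-off level effect shifts the growth path by a constant factor, while a rate effect compounds without bound. Here is a minimal numerical sketch of that standard distinction; the numbers are arbitrary illustrations of mine, not estimates from any model:

```python
gdp0 = 1.0           # normalized initial GDP
g = 0.02             # baseline growth rate (hypothetical)
level_boost = 0.10   # level effect: one-off 10% jump, growth rate unchanged
rate_boost = 0.005   # rate effect: growth rate permanently 0.5pp higher

for t in (10, 50, 100):
    baseline = gdp0 * (1 + g) ** t
    with_level = gdp0 * (1 + level_boost) * (1 + g) ** t  # constant gain
    with_rate = gdp0 * (1 + g + rate_boost) ** t          # compounding gain
    print(f"t={t:>3}: level effect +{with_level / baseline - 1:.1%}, "
          f"rate effect +{with_rate / baseline - 1:.1%}")
```

On these made-up numbers the rate effect starts out smaller than the level effect but overtakes it within a few decades and keeps growing, which is why a demonstrated permanent rate effect would carry so much weight in an argument of this type.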
Ok, I understand your point better now, and find that it makes sense. To summarize, I believe that the art of planning well toward a distant goal is to find a series of intermediate targets that we can focus on, one after the other. I was worried that your argument could be used against any such strategy. But in fact your point is that, as it stands, health interventions have not been selected by a “planner” who was actually thinking about the long-term goals, so it is unlikely that the selected interventions are the best we can find. That sounds reasonable to me. I would really like to see more research into what optimizing for long-term growth could look like (and what kind of “intermediate targets” this would select). (There is some of this in Christiano’s post, but in my opinion there is clearly room for more in-depth analysis.)
The “inside view” point is that Christiano’s estimate only takes the “price of a life saved” into account. But in truth GiveWell’s recommendations for bednets or deworming are in large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who lead more productive lives. This may yield better returns than his calculations suggest. (Micronutrient supplementation may also be quite cost-effective in this respect.)
I think this is a fair point. Specifically, I agree that GiveWell’s recommendations are only partly (in the case of bednets) or not at all (in the case of deworming) based on literally averting deaths. I haven’t looked at Paul Christiano’s post in sufficient detail to say for sure, but I agree it’s plausible that this way of using “price of a life saved” calculations might effectively ignore other benefits, thus underestimating the benefits of bednet-like interventions compared to GiveWell’s analysis.
I would need to think about this more to form a considered view, but my guess is this wouldn’t change my mind on my tentative belief that global health interventions selected for their short-term (say, anything within the next 20 years) benefits aren’t optimal growth interventions. This is largely because I think the dialectical situation looks roughly like this:
- The “beware suspicious convergence” argument implies that it’s unlikely (though not impossible) that health interventions selected for maximizing certain short-term benefits are also optimal for accelerating long-run growth. The burden of proof is thus with the view that they are optimal growth interventions.
- In addition, some back-of-the-envelope calculations suggest the same conclusion as the first bullet point.
You’ve pointed out a potential problem with the second bullet point. I think it’s plausible, perhaps even likely, that this significantly or even entirely removes the force of those calculations. But even if their conclusion were completely turned on its head, I don’t think the calculations would by themselves defeat the first bullet point.