I have two comments concerning your arguments against accelerating growth in poor countries. One is more “inside view”, the other is more “outside view”.
The “inside view” point is that Christiano’s estimate only takes into account the “price of a life saved”. But in truth GiveWell’s recommendations for bednets or deworming are in large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than his calculations suggest. (Micronutrient supplementation may also be quite effective in this respect.)
The “outside view” point is that I find our epistemology really shaky and worrisome. Let me transpose the question into AI safety to illustrate that the point is not specific to growth interventions. If I want to make progress on AI safety, maybe I can try directly to “solve AI alignment”. Let’s say that I hesitate between this and trying to improve the reliability of current-day AI algorithms. I feel that, at least in casual conversations (perhaps especially among people who are not actually working in the area), people would be all too willing to jump to “of course the first option is much better, because this is the real problem, and if it succeeds we win”. But in truth there is a tradeoff with being able to make any progress at all; it is not automatically better to turn your attention to the most long-term thing you can think of. And I think it is extremely useful to have some feedback loop that allows you to track what you are doing, which by necessity will be somewhat short-term. To summarize, I believe there is a “sweet spot” where you focus on things that seem to point in the right direction while still allowing at least some modicum of feedback over shorter time scales.
Now, consider the argument “this intervention cannot be optimal in the long run because it has been optimized for the short term”. This argument essentially allows you to reject any intervention that has shown great promise based on the observations we can gather. So effective altruism started out as “evidence-based”, and we have now reached a situation where we have built a theoretical construct that not only lets us place certain interventions above all others without giving any empirical evidence for doing so, but moreover, if another intervention is proposed that comes with good empirical backing, lets us use that very fact as an argument against it!
I may be pushing the argument a bit too far, but this still makes me feel very uncomfortable.
Let me call X the statement: “our rate of improvement remains bounded away from zero far into the future”. If I understand correctly, you are saying that we have great difficulty imagining a scenario in which X happens, and therefore that X is very unlikely.
Human imagination is very limited. For instance, most of human history shows very little change from one generation to the next; in other words, people were unable to imagine ways for future generations to do things better than they themselves already did. Here you ask our imagination to perform a spectacularly difficult task, namely to imagine what extremely advanced civilizations are likely to be doing in billions of years. I am not surprised that we fail to produce a credible scenario in which X occurs, and I do not take this as strong evidence against X.
Separately from this, I personally do not find it very likely that we will ultimately settle most of the accessible universe, as you suppose, because I would be surprised if human beings held such a special position. (In my opinion, either advanced civilizations are not so interested in expanding into space, or else we will at some point meet a much more advanced civilization, and our trajectory after that point will probably depend little on what we do before it.)
Concerning the point you put in parentheses about safety being “infinitely” preferred, I meant to use phrases such as “virtually infinitely preferred” to convey that the preference is so strong that any actual empirical estimate is considered unnecessary. In footnote 5 above, I mentioned this 80k article intended to summarize the views of the EA community, which says that speedup interventions are “essentially morally neutral” (which, given the context, I take to be equivalent to saying that risk mitigation is essentially infinitely preferred).