On Progress and Prosperity

I often encounter the following argument, or a variant of it:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be expected to be very valuable.

I encounter this argument from a wide range of perspectives, including most of the social circles I interact with other than the LessWrong community (academics, friends from school, philanthropists, engineers in the Bay Area). For example, Holden Karnofsky writes about the general positive effects of progress here (I agree with many of these points). I think that similar reasoning informs people’s views more often than it is actually articulated.

I disagree with this argument. This disagreement appears to be responsible for many of my other contrarian views, and to have significant consequences for my altruistic priorities; I discuss some concrete consequences at the end of the post. (The short summary is that I consider differential intellectual progress to be an order of magnitude more important than absolute intellectual progress.) In the body of this post I want to make my position as clear as possible.

My impression is that I disagree with the conventional view because (1) I take the long-term perspective much more seriously than most people, and (2) I have thought about this question at greater length than most people. But overall I remain a bit hesitant about this view due to its unpopularity. Note that my view is common in the LessWrong crowd and has been argued for elsewhere. In general I endorse significant skepticism towards views which are common on LessWrong but unpopular in the wider world (though I think this one is unusually solid).

Values

I suspect that one reason I disagree with conventional wisdom is that I consider the welfare of individual future people to be nearly as valuable as the welfare of existing people, and consequently the collective welfare of future people to be substantially more important than the welfare of existing people.

In particular, I think the original argument is accurate—and a dominant consideration—if we restrict our attention to people living over the next 100 years, and perhaps even the next 500 years. (Incidentally, most serious intellectuals appear to consider it unreasonable to have a view that discriminates between “Good for the people living over the next 500 years” and “Good for people overall.”)

Some people who raise this argument consider the welfare of far future people to be of dubious or reduced moral value. But many people who raise this argument purport to share a long-term, risk-neutral, aggregative perspective. I think that this latter group is making an empirical error, which is what I want to address here.

Incidentally, I hope that in the future the EA crowd adopts a more reasonable compromise between long-term, species-agnostic, risk-neutral utilitarianism, and more normal-looking intuitions that by behaving morally we can collectively make all of our lives much better. It seems most EAs grant that there is a place for selfishness, but often reject conventional behaviors which collectively benefit the modern developed world.

I think that part of the resistance to anti-progress arguments comes from the desire to recover conventional pro-social behavior, without explicit recognition of that goal.

This is a painful disagreement

This is a painful disagreement for me for two reasons.

First, I believe that society at large substantially underestimates the welfare gains from economic and technological progress. Indeed, I think that given an exclusive concern for the next few generations, these should probably be the overwhelming concerns of a would-be altruist. I could talk at length about this view and the errors which I think underlie conventional views, but it would be a digression.

In light of this, I find it extremely unpleasant to find myself on the anti-progress side of almost any argument. First, because I think that someone sensible who learns my position will rationally assume that I am guilty of the most common errors responsible for the position, rather than making a heroically charitable assumption. Second, I have a visceral desire to argue for what I think is right (I hear the call of someone being wrong on the internet), and in most everyday discussions that means arguing for the merits of technological and economic progress.

Second, I think that pursuing plans which result in substantially slower growth comes at a material expense for the people alive today, and especially for their children and grandchildren. For the same reason that I would be uncomfortable hurting those around me for personal advantage, I am uncomfortable hurting those around me in the service of utilitarian ends (a problem much exacerbated by the erosion of the act-omission distinction).

[In fact I mostly do try to be a nice guy; in part this is due to the good effects of not-being-a-jerk (which are often substantial), but it’s also largely due to a softening of the aggregate-utilitarian perspective and a decision-theoretic view partially calibrated to reproduce intuitions about what we ought to do.]

Why I disagree

For reference, the argument in question:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be expected to be very valuable.

[Meta]

This is an instance of the general schema “In the past we have observed an association between X [progress] and Y [goodness]. This suggests that X is generally associated with Y, and in particular that this future instance of X will be associated with Y.”

I have no problem with this schema in general, nor with this argument in particular. One way of responding to such an argument is to offer a clear explanation of why X and Y have been associated in the observed cases. This then screens off the evidence about the general association of X with Y; if the clear explanation doesn’t predict that X and Y will be associated in the future, this undermines the predicted association.

[Object level]

In this case, it seems clear that greater technological capabilities at time T lead to improved quality of life at time T. This is a very simple observation, robustly supported by the historical record. Moreover, it is also clear that improved technological capabilities at time T lead to improved technological capabilities at time T+1. And I could make similar statements for economic progress, and arguably for social progress.

Once we accept this, we have a clear explanation of why faster progress leads to improvements in quality of life. There is no mysterious correlation to be explained.

So now we might ask: do the same mechanisms suggest that technological progress will be good overall, on aggregate utilitarian grounds?

The answer appears to me to be no.

It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course. That is, the relationship between progress at time T and progress at time T+1 will break down eventually. For example, Robin Hanson points out that if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, each atom in our galaxy would need to be about 10^140 times as valuable as modern society. Indeed, unless our current understanding of the laws of physics is badly mistaken, progress must eventually slow to an extremely modest rate by any meaningful measure.
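To see how stark the numbers are, here is a minimal back-of-envelope sketch of the same style of calculation. The constants (roughly 4.5%/yr current growth, about 5 billion years of solar lifetime remaining, about 10^68 atoms in the galaxy) are my own rough assumptions for illustration, not Hanson’s exact inputs; the conclusion is insensitive to them.

```python
# Back-of-envelope: value per atom implied by sustained exponential growth.
# All constants are rough assumptions for illustration, not Hanson's exact
# inputs; the conclusion does not depend on them in any delicate way.
import math

current_growth = 0.045                 # assumed ~4.5%/yr gross world product growth
slow_growth = 0.01 * current_growth    # 1% of the current rate
sun_years_left = 5e9                   # assumed remaining solar lifetime (years)
horizon = 0.01 * sun_years_left        # 1% of that lifetime
atoms_in_galaxy_log10 = 68             # assumed ~10^68 atoms in the Milky Way

# Total growth factor is exp(rate * time); work in log10 to avoid overflow.
growth_log10 = slow_growth * horizon / math.log(10)
value_per_atom_log10 = growth_log10 - atoms_in_galaxy_log10

print(f"economy grows by a factor of 10^{growth_log10:,.0f}")
print(f"implied value per atom: 10^{value_per_atom_log10:,.0f} times today's economy")
```

With these assumptions the implied value per atom comes out vastly larger even than 10^140; any remotely similar numbers force the same conclusion, which is all the argument needs.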

So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is “saturated,” where progress has run its course and has only very modest further effects.

I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn’t have much effect on very long-term outcomes.

[Some responses]

1. Maybe society will encounter problems in the future which will affect the long-term conditions of human society, and its ability to solve those problems will depend on its level of development when they are encountered?

[My first response to all of these considerations is really “Maybe, but you’re no longer in the regime of extrapolating from past progress. I don’t think that this is suggested by the fact that science has made our lives so much better and cured disease.” But to be a good sport I’ll answer them here anyway.]

For better or worse, almost all problems with the potential to permanently change human affairs are of our own making. There are natural disasters, periodic asteroid impacts, diseases and great die-offs; there is aging and natural climate change and the gradual burning out of the stars. But compared to human activity all of those events are slow. The risk of extinction from asteroids each year is very small, fast climate change is driven primarily by human activity, and the stars burn down at a glacial pace.

The ability to permanently alter the future is almost entirely driven by technological progress and the use of existing technologies. With the possible exceptions of anthropogenic climate change and a particularly bad nuclear war, we barely even have the ability to really mess things up today: it appears that almost all of the risk of things going terribly and irrevocably awry lies in our future. Hastening technological progress improves our ability to cope with problems, but it also hastens the arrival of the problems at almost the same rate.

2. Why should progress continue indefinitely? Maybe there will be progress until 2100, and the level of sophistication in that year will determine the entire future?

This scenario just seems to strain plausibility. Again, almost all ways that progress could plausibly stop don’t depend on the calendar year, but are driven by human activities (and presumably some intermediating technological progress).

3. Might faster progress beget more progress and a more functional society, which will be better able to deal with the problems that arise at each fixed level of development?

I think this is an interesting discussion but I don’t think it has any plausible claim to a “robust” or “non-speculative” argument, or to be a primary consideration in what outcomes are desirable. In particular, you can’t justify this kind of thing with “Progress seems to have been good so far”; you need to run a much more sophisticated historical counterfactual, and probably you need to start speculating about causal mechanisms if you actually want the story to be convincing. Note that you need to distinguish wealth-related effects (which don’t depend on how fast wealth is accumulated, and consequently don’t affect our ability to address problems at each fixed level of development) from rate-of-progress-related effects, which seems empirically treacherous (not to mention the greater superficial plausibility of wealth effects).

In particular I might note that technological progress seems to have proceeded essentially continuously for the last 1000 or so years, with periodic setbacks but no apparent risk of stalling or backpedaling (outside of small isolated populations). Without the risk of an indefinite stagnation leading to eventual extinction, it’s not really clear why momentum effects would have a positive long-term impact (this seems to be begging the question). It is more clear how people being nicer could help, and I grant that there is some evidence for faster progress leading to niceness, but I think this is definitely in the relatively speculative regime.

[An alternative story]

An alternative story is that while progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than the effect of differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).

That is, if most problems that people will face are of their own creation, we might be more interested in the relative rate at which people create problems (or acquire the ability to create them) vs. resolve problems (or acquire the ability to resolve them). Such relative rates in progress would be much more important than an overall speedup in technological, economic, and social progress. And moreover, we can’t use the fact that X has been good for quality of life historically in order to say anything about which side of the ledger it comes down on.
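A toy model (my own construction, not from the original argument) may make the contrast concrete. Suppose quality of life follows a curve that eventually saturates; compare speeding the whole curve up against changing the level at which it saturates, as a crude stand-in for differential progress.

```python
# Toy model (my own construction): quality of life follows a logistic curve
# that saturates at some long-run level. A uniform speedup reaches each stage
# sooner but ends at the same place; "differential progress" is modeled as a
# change in the saturation level itself.
import math

def quality(t, speedup=1.0, saturation=1.0):
    """Quality of life at time t; the transition happens around t = 100/speedup."""
    tau = speedup * t  # a uniform speedup just rescales time
    return saturation / (1.0 + math.exp(-(tau - 100.0) / 20.0))

def total_welfare(horizon, speedup=1.0, saturation=1.0):
    """Crude cumulative welfare: sum of quality over the horizon, in 1-year steps."""
    return sum(quality(t, speedup, saturation) for t in range(int(horizon)))

for label, speedup, saturation in [
    ("baseline",             1.0, 1.0),
    ("everything 2x faster", 2.0, 1.0),
    ("10% better endpoint",  1.0, 1.1),
]:
    print(f"{label:>22}: 200-yr welfare = {total_welfare(200, speedup, saturation):7.1f}, "
          f"100,000-yr welfare = {total_welfare(100_000, speedup, saturation):9.1f}")
```

The particular numbers mean nothing; the pattern is the point. A uniform speedup buys a fixed, bounded bonus during the transition, which shrinks to irrelevance as the horizon grows, while a change in the endpoint scales with the horizon.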

I’d like to note that this is not an argument about AI or about any particular future scenario. It’s an argument that I could have made just as well in 1500 (except insofar as natural phenomena have become even less concerning now than they were in 1500). And the observations since 1500 don’t seem to discredit this model at all. Predictions only diverge regarding what happens after quality of life stops increasing from technological progress.

This might operate at the level of e.g. differential technological development, so that some kinds of technological progress create value while others destroy it; or it might operate at a higher level, so that e.g. the accumulation of wealth destroys value while technological progress creates it (if we thought it was better to be as poor as possible given our level of technological sophistication). Or we might think that population growth is bad while everything else is good, or whatever.

The key message is that when we compare our situation to that of the last century, we mostly observe the overall benefits of progress, but that on a long-run perspective these overall benefits are likely to be much smaller than the difference between “good stuff” and “bad stuff.”

(For a more fleshed out version of this story, see again Nick Beckstead’s thesis or this presentation.)

Implications

Why does any of this matter? A few random implications:

  • I suspect that addressing poverty is good for the overall pace of progress, and for the welfare of people over the next 200 years. But I don’t see much reason to think that it will make our society better in the very long-run, and I think that the arguments to this effect are quite speculative. For example, I think they are much more speculative than arguments offered for more explicitly future-shaping interventions. The same can be said for many other common-sense interventions.

  • I think that faster AI progress is a huge boon for this generation’s welfare. But I think that improving our understanding of where AI is going and how it will develop is probably more important, because it reduces the probability that AI development unfolds in an unfavorable way, rather than merely accelerating its arrival.

  • I think that improvements in decision-making capabilities or (probably) intelligence are more important than other productivity benefits, e.g. the benefits of automation, and so I tend to focus on cognitive enhancement or improvements in collective decision-making rather than the more conventional menu of entrepreneurial projects.

I don’t think that any of these are particularly speculative or wild propositions, but often people arguing against investment in differential progress seem to have unreasonably high expectations. For example, I expect understanding-where-AI-is-going to have a much smaller effect on the world than helping-AI-get-there, but don’t think that is a sufficient argument against it.