On Progress and Prosperity

I often encounter the following argument, or a variant of it:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be expected to be very valuable.

I encounter this argument from a wide range of perspectives, including most of the social circles I interact with other than the LessWrong community (academics, friends from school, philanthropists, engineers in the Bay Area). For example, Holden Karnofsky writes about the general positive effects of progress here (I agree with many of these points). I think that similar reasoning informs people’s views more often than it is actually articulated.

I disagree with this argument. This disagreement appears to be responsible for many of my other contrarian views, and to have significant consequences for my altruistic priorities; I discuss some concrete consequences at the end of the post. (The short summary is that I consider differential intellectual progress to be an order of magnitude more important than absolute intellectual progress.) In the body of this post I want to make my position as clear as possible.

My impression is that I disagree with the conventional view because (1) I take the long-term perspective much more seriously than most people, and (2) I have thought about this question at greater length than most people. But overall I remain a bit hesitant in this view due to its unpopularity. Note that my view is common in the LessWrong crowd and has been argued for elsewhere. In general I endorse significant skepticism about views which are common on LessWrong but unpopular in the wider world (though I think this one is unusually solid).

Values

I suspect that one reason I disagree with conventional wisdom is that I consider the welfare of individual future people to be nearly as valuable as the welfare of existing people, and consequently the collective welfare of future people to be substantially more important than the welfare of existing people.

In particular, I think the original argument is accurate—and a dominant consideration—if we restrict our attention to people living over the next 100 years, and perhaps even the next 500 years. (Incidentally, most serious intellectuals appear to consider it unreasonable to have a view that discriminates between “Good for the people living over the next 500 years” and “Good for people overall.”)

Some people who raise this argument consider the welfare of far future people to be of dubious or reduced moral value. But many people who raise this argument purport to share a long-term, risk-neutral, aggregative perspective. I think that this latter group is making an empirical error, which is what I want to address here.

Incidentally, I hope that in the future the EA crowd adopts a more reasonable compromise between long-term, species-agnostic, risk-neutral utilitarianism and more normal-looking intuitions that by behaving morally we can collectively make all of our lives much better. It seems most EAs grant that there is a place for selfishness, but often reject conventional behaviors which collectively benefit the modern developed world.

I think that part of the resistance to anti-progress arguments comes from the desire to recover conventional pro-social behavior, without explicit recognition of that goal.

This is a painful disagreement

This is a painful disagreement for me for two reasons.

First, I believe that society at large substantially underestimates the welfare gains from economic and technological progress. Indeed, I think that given an exclusive concern for the next few generations, these should probably be the overwhelming concerns of a would-be altruist. I could talk at length about this view and the errors which I think underlie conventional views, but it would be a digression.

In light of this, I find it extremely unpleasant to find myself on the anti-progress side of almost any argument. First, because I think that someone sensible who learns my position will rationally assume that I am guilty of the most common errors responsible for the position, rather than making a heroically charitable assumption. Second, I have a visceral desire to argue for what I think is right (I hear the call of someone being wrong on the internet), and in most everyday discussions that means arguing for the merits of technological and economic progress.

Second, I think that pursuing plans which result in substantially slower growth comes at a material expense for the people alive today, and especially for their children and grandchildren. For the same reason that I would be uncomfortable hurting those around me for personal advantage, I am uncomfortable hurting those around me in the service of utilitarian ends (a problem much exacerbated by the erosion of the act-omission distinction).

[In fact I mostly do try to be a nice guy; in part this is due to the good effects of not-being-a-jerk (which are often substantial), but it’s also largely due to a softening of the aggregate-utilitarian perspective and a decision-theoretic view partially calibrated to reproduce intuitions about what we ought to do.]

Why I disagree

For reference, the argument in question:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society’s ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.

[Meta]

This is an instance of the general schema “In the past we have observed an association between X [progress] and Y [goodness]. This suggests that X is generally associated with Y, and in particular that this future instance of X will be associated with Y.”

I have no problem with this schema in general, nor with this argument in particular. One way of responding to such an argument is to offer a clear explanation of why X and Y have been associated in the observed cases. This then screens off the evidence about the general association of X with Y; if the clear explanation doesn’t predict that X and Y will be associated in the future, this undermines the predicted association.

[Object level]

In this case, it seems clear that greater technological capabilities at time T lead to improved quality of life at time T. This is a very simple observation, robustly supported by the historical record. Moreover, it is also clear that improved technological capabilities at time T lead to improved technological capabilities at time T+1. And I could make similar statements for economic progress, and arguably for social progress.

Once we accept this, we have a clear explanation of why faster progress leads to improvements in quality of life. There is no mysterious correlation to be explained.

So now we might ask: do the same mechanisms suggest that technological progress will be good overall, on aggregate utilitarian grounds?

The answer appears to me to be no.

It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course. That is, the relationship between progress at time T and progress at time T+1 will break down eventually. For example, Robin Hanson points out that if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, each atom in our galaxy would need to be about 10^140 times as valuable as modern society. Indeed, unless our current understanding of the laws of physics is mistaken, progress must eventually slow to an extremely modest rate by any meaningful measure.
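To give a feel for the arithmetic behind this kind of claim, here is a minimal back-of-envelope sketch in Python. The numbers are my own illustrative assumptions, not Hanson’s exact figures: roughly 4%/year current growth (so 1% of that is 0.04%/year), roughly 5 billion years of remaining solar lifetime (so 1% is 50 million years), about 10^69 atoms in a galaxy, and world output on the order of 10^14 dollars per year.

    import math

    # Back-of-envelope sketch: sustained exponential growth quickly outruns
    # any plausible amount of value per atom. Illustrative assumptions only.
    annual_rate = 0.0004        # 1% of an assumed ~4%/year modern growth rate
    years = 50_000_000          # 1% of an assumed ~5 billion remaining solar years

    # Orders of magnitude of total growth over that period.
    log10_growth = years * math.log10(1 + annual_rate)

    # Compare against what is physically available.
    log10_atoms_in_galaxy = 69   # very rough count of atoms in a galaxy
    log10_output_today = 14      # world output today, in dollars per year
    log10_value_per_atom = log10_growth + log10_output_today - log10_atoms_in_galaxy

    print(f"total growth factor: ~10^{log10_growth:.0f}")
    print(f"implied output per atom: ~10^{log10_value_per_atom:.0f} dollars per year")

With these assumptions the total growth factor comes out around 10^8700, i.e. thousands of orders of magnitude of output per atom; changing the assumed numbers moves the exponent around, but not the qualitative conclusion that such growth cannot continue.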

So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is “saturated,” where progress has run its course and has only very modest further effects.

I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn’t have much effect on very long-term outcomes.

[Some responses]

1. Maybe society will encounter problems in the future which will have an effect on the long-term conditions of human society, and its ability to solve those problems will depend on its level of development when they are encountered?

[My first response to all of these considerations is really “Maybe, but you’re no longer in the regime of extrapolating from past progress. I don’t think that this is suggested by the fact that science has made our lives so much better and cured disease.” But to be a good sport I’ll answer them here anyway.]

For better or worse, almost all problems with the potential to permanently change human affairs are of our own making. There are natural disasters, periodic asteroid impacts, diseases and great die-offs; there is aging and natural climate change and the gradual burning out of the stars. But compared to human activity all of those events are slow. The risk of extinction from asteroids each year is very small, fast climate change is driven primarily by human activity, and the stars burn down at a glacial pace.

The ability to permanently alter the future is almost entirely driven by technological progress and the use of existing technologies. With the possible exceptions of anthropogenic climate change and a particularly bad nuclear war, we barely even have the ability to really mess things up today: it appears that almost all of the risk of things going terribly and irrevocably awry lies in our future. Hastening technological progress improves our ability to cope with problems, but it also hastens the arrival of the problems at almost the same rate.

2. Why should progress continue indefinitely? Maybe there will be progress until 2100, and the level of sophistication in that year will determine the entire future?

This scenario just seems to strain plausibility. Again, almost all ways that progress could plausibly stop don’t depend on the calendar year, but are driven by human activities (and presumably some intermediating technological progress).

3. Might faster progress beget more progress and a more functional society, which will be better able to deal with the problems that arise at each fixed level of development?

I think this is an interesting discussion, but I don’t think it has any plausible claim to a “robust” or “non-speculative” argument, or to be a primary consideration in what outcomes are desirable. In particular, you can’t justify this kind of thing with “Progress seems to have been good so far”; you need to run a much more sophisticated historical counterfactual, and probably you need to start speculating about causal mechanisms if you actually want the story to be convincing. Note that you need to distinguish wealth-related effects (which don’t depend on how fast wealth is accumulated, and consequently don’t affect our ability to address problems at each fixed level of development) from rate-of-progress effects, a distinction which seems empirically treacherous (not to mention the greater superficial plausibility of wealth effects).

In particular I might note that technological progress seems to have proceeded essentially continuously for the last 1000 or so years, with periodic setbacks but no apparent risk of stalling or backpedaling (outside of small isolated populations). Without the risk of an indefinite stagnation leading to eventual extinction, it’s not really clear why momentum effects would have a positive long-term impact (this seems to be begging the question). It is more clear how people being nicer could help, and I grant that there is some evidence for faster progress leading to niceness, but I think this is definitely in the relatively speculative regime.

[An alternative story]

An alternative story is that while progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than the effects of differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).

That is, if most problems that people will face are of their own creation, we might be more interested in the relative rate at which people create problems (or acquire the ability to create them) vs. resolve problems (or acquire the ability to resolve them). Such relative rates of progress would be much more important than an overall speedup in technological, economic, and social progress. And moreover, we can’t use the fact that X has been good for quality of life historically in order to say anything about which side of the ledger it comes down on.

I’d like to note that this is not an argument about AI or about any particular future scenario. It’s an argument that I could have made just as well in 1500 (except insofar as natural phenomena have become even less concerning now than they were in 1500). And the observations since 1500 don’t seem to discredit this model at all. Predictions only diverge regarding what happens after quality of life stops increasing from technological progress.

This might operate at the level of e.g. differential technological development, so that some kinds of technological progress create value while others destroy it; or it might operate at a higher level, so that e.g. the accumulation of wealth destroys value while technological progress creates it (if we thought it was better to be as poor as possible given our level of technological sophistication). Or we might think that population growth is bad while everything else is good, or whatever.

The key message is that when we compare our situation to the situation of last century, we mostly observe the overall benefits of progress, but on a long-run perspective these overall benefits are likely to be much smaller than the difference between “good stuff” and “bad stuff.”

(For a more fleshed out version of this story, see again Nick Beckstead’s thesis or this presentation.)

Implications

Why does any of this matter? A few random implications:

  • I suspect that addressing poverty is good for the overall pace of progress, and for the welfare of people over the next 200 years. But I don’t see much reason to think that it will make our society better in the very long run, and I think that the arguments to this effect are quite speculative. For example, I think they are much more speculative than arguments offered for more explicitly future-shaping interventions. The same can be said for many other common-sense interventions.

  • I think that faster AI progress is a huge boon for this generation’s welfare. But I think that improving our understanding of where AI is going and how it will develop is probably more important, because that reduces the probability that the development of AI unfolds in an unfavorable way, rather than merely accelerating its arrival.

  • I think that improvements in decision-making capabilities or (probably) intelligence are more important than other productivity benefits, e.g. the benefits of automation, and so I tend to focus on cognitive enhancement or improvements in collective decision-making rather than the more conventional menu of entrepreneurial projects.

I don’t think that any of these are particularly speculative or wild propositions, but often people arguing against investment in differential progress seem to have unreasonably high expectations. For example, I expect understanding-where-AI-is-going to have a much smaller effect on the world than helping-AI-get-there, but I don’t think that is a sufficient argument against it.