A relatively atheoretical perspective on astronomical waste

Crossposted from the Global Priorities Project

Introduction

It is commonly objected that the “long-run” perspective on effective altruism rests on esoteric assumptions from moral philosophy that are highly debatable. Yes, the long-term future may overwhelm aggregate welfare considerations, but does it follow that the long-term future is overwhelmingly important? Do I really want my plan for helping the world to rest on the assumption that the benefit from allowing extra people to exist scales linearly with population when large numbers of extra people are allowed to exist?

In my dissertation on this topic, I tried to defend the conclusion that the distant future is overwhelmingly important without committing to a highly specific view about population ethics (such as total utilitarianism). I did this by appealing to more general principles, but I did end up delving pretty deeply into some standard philosophical issues related to population ethics. And I don’t see how to avoid that if you want to independently evaluate whether it’s overwhelmingly important for humanity to survive in the long-term future (rather than, say, just deferring to common sense).

In this post, I outline a relatively atheoretical argument that affecting long-run outcomes for civilization is overwhelmingly important, and attempt to sidestep some of the deeper philosophical disagreements. It won’t be an argument that preventing extinction would be overwhelmingly important, but it will be an argument that other changes to humanity’s long-term trajectory overwhelm short-term considerations. And I’m just going to stick to the moral philosophy here. I will not discuss important issues related to how to handle Knightian uncertainty, “robust” probability estimates, or the long-term consequences of accomplishing good in the short run. I think those issues are more important, but I’m just taking on one piece of the puzzle that has to do with moral philosophy, where I thought I could quickly explain something that may help people think through the issues.

In outline form, my argument is as follows:

  1. In very ordinary resource conservation cases that are easy to think about, it is clearly important to ensure that the lives of future generations go well, and it’s natural to think that the importance scales linearly with the number of future people whose lives will be affected by the conservation work.

  2. By analogy, it is important to ensure that, if humanity does survive into the distant future, its trajectory is as good as possible, and the importance of shaping the long-term future scales roughly linearly with the expected number of people in the future.

  3. Premise (2), when combined with the standard set of (admittedly debatable) empirical and decision-theoretic assumptions of the astronomical waste argument, yields the standard conclusion of that argument: shaping the long-term future is overwhelmingly important.

As in other discussions of this issue (such as Nick Bostrom’s papers “Astronomical Waste” and “Existential Risk Prevention as Global Priority,” and my dissertation), this conversation will generally assume that we’re talking about good accomplished from an impartial perspective, and will not attend to deontological, virtue-theoretic, or justice-related considerations.

A review of the astronomical waste argument and an adjustment to it

The standard version of the astronomical waste argument runs as follows:

  1. The expected size of humanity’s future influence is astronomically great.

  2. If the expected size of humanity’s future influence is astronomically great, then the expected value of the future is astronomically great.

  3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.

  4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.

  5. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.

  6. Therefore, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.

I’ve argued for adjusting the last three steps of this argument in the following way:

4’. Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.

5’. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

6’. Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

The basic thought here is that what the astronomical waste argument really shows is that future welfare considerations swamp short-term considerations, so that long-term consequences for the distant future are overwhelmingly important in comparison with purely short-term considerations (apart from any long-term consequences that short-term outcomes may themselves produce).

Astronomical waste may involve changes in quality of life, rather than size of population

Often, the astronomical waste argument is combined with the idea that the best way to minimize astronomical waste is to minimize the probability of premature human extinction. How important it is to prevent premature human extinction is a subject of philosophical debate, and the debate largely rests on whether it is important to allow large numbers of people to exist in the future. So when someone complains that the astronomical waste argument rests on esoteric assumptions about moral philosophy, they are implicitly objecting to premise (2) or (3). They are saying that even if human influence on the future is astronomically great, maybe changing how well humanity exercises its long-term potential isn’t very important because maybe it isn’t important to ensure that there are a large number of people living in the future.

However, the concept of existential risk is wide enough to include any drastic curtailment of humanity’s long-term potential, and the concept of a “trajectory change” is wide enough to include any small but important change in humanity’s long-term development. And the value of these existential risks or trajectory changes need not depend on changes in the population. For example,

  • In “The Future of Human Evolution,” Nick Bostrom discusses a scenario in which evolutionary dynamics result in substantial decreases in quality of life for all future generations, and the main problem is not a population deficit.

  • Paul Christiano outlined long-term resource inequality as a possible consequence of developing advanced machine intelligence.

  • I discussed various specific trajectory changes in a comment on an essay mentioned above.

There is limited philosophical debate about the importance of changes in the quality of life of future generations

The main group of people who deny that it is important that future people exist are those with “person-affecting views.” These people claim that if I must choose between outcome A and outcome B, and person X exists in outcome A but not outcome B, it’s not possible to affect person X by choosing outcome A rather than B. Because of this, they claim that causing people to exist can’t benefit them and isn’t important. I think this view suffers from fatal objections, which I have discussed in chapter 4 of my dissertation, and you can check that out if you want to learn more. But, for the sake of argument, let’s agree that creating “extra” people can’t help the people created and isn’t important.

A puzzle for people with person-affecting views goes as follows:

Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The consequences of that choice for the persons who exist now or will come into existence over the next two centuries will be “slightly higher” than under a conservation alternative (Parfit 1987, 362; see also Parfit 2011 (vol. 2), 218). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (Parfit 1987, 363). Surely agents ought to have chosen conservation in some form or another instead. But note that, at the same time, depletion seems to harm no one. While distant future persons, by hypothesis, will suffer as a result of depletion, it is also true that for each such person a conservation choice (very probably) would have changed the timing and manner of the relevant conception. That change, in turn, would have changed the identities of the people conceived and the identities of the people who eventually exist. Any suffering, then, that they endure under the depletion choice would seem to be unavoidable if those persons are ever to exist at all. Assuming (here and throughout) that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (Parfit 1987, 363). At least: depletion does not harm, or make things worse for, and is not “bad for,” anyone who does or will exist under the depletion choice.

The seemingly natural thing to say if you have a person-affecting view is that because conservation doesn’t benefit anyone, it isn’t important. But this is a very strange thing to say, and people having this conversation generally recognize that saying it involves biting a bullet. The general tenor of the conversation is that conservation is obviously important in this example, and people with person-affecting views need to provide an explanation consonant with that intuition.

Whatever the ultimate philosophical justification, I think we should say that choosing conservation in the above example is important, and this has something to do with the fact that choosing conservation has consequences that are relevant to the quality of life of many future people.

Intuitively, giving N times as many future people higher quality of life is N times as important

Suppose that conservation would have consequences relevant to 100 times as many people in case A as it would in case B. How much more important would conservation be in case A? Intuitively, it would be 100 times more important. This generally fits with Holden Karnofsky’s intuition that a 1/N probability of saving N lives is about as important as saving one life, for any N:

I wish to be the sort of person who would happily pay $1 for a robust (reliable, true, correct) 10/N probability of saving N lives, for astronomically huge N—while simultaneously refusing to pay $1 to a random person on the street claiming s/he will save N lives with it.

More generally, we could say:

Principle of Scale: Other things being equal, it is N times better (in itself) to ensure that N people in some position have higher quality of life than other people who would be in their position than it is to do this for one person.

I had to state the principle circuitously to avoid saying that things like conservation programs could “help” future generations, because according to people with person-affecting views, if our “helping” changes the identities of future people, then we aren’t “helping” anyone, and that’s relevant. If I had said it in ordinary language, the principle would have said, “If you can help N people, that’s N times better than helping one person.” The principle could use some tinkering to deal with concerns about equality and so on, but it will serve well enough for our purposes.
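
To make the linearity in the Principle of Scale explicit, here is a minimal formal sketch. The notation (a per-person value v, a count N, and a probability p) is mine, introduced only for illustration; it is not part of the principle’s statement.

```latex
% Minimal formal sketch of the Principle of Scale; the notation is illustrative, not the author's.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $v$ be the value of ensuring that one person in some position has a higher
quality of life than whoever would otherwise occupy that position. The Principle
of Scale says that doing the same for $N$ people is $N$ times as valuable:
\begin{equation}
  V(N) = N \cdot v .
\end{equation}
Combined with ordinary expected-value reasoning, a probability $p$ of making such
a difference for $N$ people is worth
\begin{equation}
  \mathbb{E}[V] = p \cdot N \cdot v ,
\end{equation}
so a $1/N$ chance of affecting $N$ people is worth $\tfrac{1}{N} \cdot N \cdot v = v$,
the same as affecting one person for certain, which matches the intuition quoted above.

\end{document}
```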

The Principle of Scale may seem obvious, but even it is debatable. You wouldn’t find philosophical agreement about it. For example, some philosophers who claim that additional lives have diminishing marginal value would claim that in situations where many people already exist, it matters much less if a person is helped. I attack these perspectives in chapter 5 of my dissertation, and you can check that out if you want to learn more. But, in any case, the Principle of Scale does seem pretty compelling—especially if you’re the kind of person who doesn’t have time for esoteric debates about population ethics—so let’s run with it.

Now for the most questionable steps: Let’s assume with the astronomical waste argument that the expected number of future people is overwhelming, and that it is possible to improve the quality of life for an overwhelming number of future people through forward-thinking interventions. If we combine this with the Principle of Scale and wave our hands a bit, we get the conclusion that shifting quality of life for an overwhelming number of future people is overwhelmingly more important than any short-term consideration. And that is very close to what the long-run perspective says about helping future generations, though importantly different because this version of the argument might not put weight on preventing extinction. (I say “might not” rather than “would not” because if you disagree with the people with person-affecting views but accept the Principle of Scale outlined above, you might just accept the usual conclusion of the astronomical waste argument.)
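
To see how that hand-waving might cash out numerically, here is an illustrative back-of-the-envelope comparison. All of the specific figures (the 10^16 expected future people, the 10^-6 success probability, and the 10^4 people helped in the short run) are assumptions I have chosen for the example, not estimates from this post or the literature it cites.

```latex
% Illustrative expected-value comparison; every number here is an assumption
% chosen for the example, not an estimate from the post.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Suppose a forward-looking intervention would raise the quality of life of an
expected $N_{\text{future}} = 10^{16}$ future people if it succeeds, and it
succeeds with probability $p = 10^{-6}$. With $v$ the per-person value from the
Principle of Scale,
\begin{equation}
  \mathbb{E}[V_{\text{long-run}}] = p \cdot N_{\text{future}} \cdot v = 10^{10}\, v .
\end{equation}
A short-term intervention that helps $10^{4}$ people with certainty is worth
$10^{4}\, v$, so under these assumed numbers the trajectory change is roughly a
million times more valuable. This is the sense in which quality-of-life
trajectory changes can swamp short-term considerations.

\end{document}
```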

Does the Principle of Scale break down when large numbers are at stake?

I have no argument that it doesn’t, but I note that (i) this wasn’t Holden Karnofsky’s intuition about saving N lives, (ii) it isn’t mine, and (iii) I don’t really see a compelling justification for it. The main reason I can think of for wanting it to break down is not liking the conclusion that affecting long-run outcomes for humanity is overwhelmingly important in comparison with short-term considerations. If you really want to avoid the conclusion that shaping the long-term future is overwhelmingly important, I believe it would be better to accommodate this idea by appealing to other perspectives and a framework for integrating the insights of different perspectives—such as the one that Holden has talked about—rather than by altering this perspective. If you’re in that camp, my hope is that reading this post will cause you to put more weight on the perspectives that place great importance on the future.

Summary

To wrap up, I’ve argued that:

  1. Reducing astronomical waste need not involve preventing human extinction—it can involve other changes in humanity’s long-term trajectory.

  2. While not widely discussed, the Principle of Scale is fairly attractive from an atheoretical standpoint.

  3. The Principle of Scale—when combined with other standard assumptions in the literature on astronomical waste—suggests that some trajectory changes would be overwhelmingly important in comparison with short-term considerations. It could be accepted by people who have person-affecting views or people who don’t want to get too bogged down in esoteric debates about moral philosophy.

The perspective I’ve outlined here is still philosophically controversial, but it is at least somewhat independent of the standard approach to astronomical waste. Ultimately, any take on astronomical waste—including ignoring it—will be committed to philosophical assumptions of some kind, but perhaps the perspective outlined would be accepted more widely, especially by people with temperaments consonant with effective altruism, than perspectives relying on more specific theories or a larger number of principles.