Existential risk as common cause

Summary: Why many different worldviews should prioritise reducing existential risk. Also an exhaustive list of people who can ignore this argument. (Writeup of an old argument I can't find a source for.)

Confidence: 70%.

Crossposted from gleech.org.

---

Imagine someone who thought that art was the only thing that made life worth living. [1] What should they do? Binge on galleries? Work to increase the amount of art and artistic experience, by going into finance to fund artists? Or by becoming an activist for government funding for the arts? Maybe. But there's a case that they should pay attention to ways the world might end: after all, you can't enjoy art if we're all dead.

1. Aesthetic experience is good in itself: it's a terminal goal.
2. The extinction of life would destroy all aesthetic experience & prevent future experiences.
3. So reducing existential risk is good, if only to protect the conditions for aesthetic experience.

And this generalises to a huge range of values:

1. [good] is good in itself: it's a terminal goal.
2. The extinction of life would destroy [good], and prevent future [good].
3. So reducing existential risk is good, if only to protect the conditions for [good].

Caspar Oesterheld gives a few examples of what people can plug into those brackets:

Abundance, achievement, adventure, affiliation, altruism, apatheia, art, asceticism, austerity, autarky, authority, autonomy, beauty, benevolence, bodily integrity, challenge, collective property, commemoration, communism, community, compassion, competence, competition, competitiveness, complexity, comradery, conscientiousness, consciousness, contentment, cooperation, courage, [crab-mentality], creativity, crime, critical thinking, curiosity, democracy, determination, dignity, diligence, discipline, diversity, duties, education, emotion, envy, equality, equanimity, excellence, excitement, experience, fairness, faithfulness, family, fortitude, frankness, free will, freedom, friendship, frugality, fulfillment, fun, good intentions, greed, happiness, harmony, health, honesty, honor, humility, idealism, idolatry, imagination, improvement, incorruptibility, individuality, industriousness, intelligence, justice, knowledge, law abidance, life, love, loyalty, modesty, monogamy, mutual affection, nature, novelty, obedience, openness, optimism, order, organization, pain, parsimony, peace, peace of mind, pity, play, population size, preference fulfillment, privacy, progress, promises, property, prosperity, punctuality, punishment, purity, racism, rationality, reliability, religion, respect, restraint, rights, sadness, safety, sanctity, security, self-control, self-denial, self-determination, self-expression, self-pity, simplicity, sincerity, social parasitism, society, spirituality, stability, straightforwardness, strength, striving, subordination, suffering, surprise, technology, temperance, thought, tolerance, toughness, truth, tradition, transparency, valor, variety, veracity, wealth, welfare, wisdom.

So “from a huge variety of viewpoints, the end of the world is bad”? What a revelation!

The above is only interesting if we get from “it's good to reduce x-risk” to “it's the most important thing to do” for these values. This would be the case if 1) extinction were relatively likely, relatively soon, and 2) we could do something about it. We can't be that confident of either of these things, but there are good reasons to both worry and plan.

(If you think that we can only be radically uncertain about the future, note that this implies you should devote more attention to the worst scenarios, not less: ‘high uncertainty’ is not the same as ‘low probability’.)

It's hard to say at what precise level of confidence and discount rate this argument overrides direct promotion of [good]; I'm claiming that it's implausible that your one lifetime of direct promotion would outweigh all future instances, if you're a consequentialist and place reasonable weight on future lives.
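
(A back-of-envelope way to frame this, with illustrative symbols of my own rather than anyone's worked-out model: let $v$ be the value you add per year by directly promoting [good], $\Delta p$ the reduction in extinction probability the same effort could buy, and $V$ the expected value of all future instances of [good]. The comparison is then roughly

$$ 80\,v \quad \text{vs.} \quad \Delta p \cdot V, $$

and for a consequentialist with a low discount rate, $V$ plausibly dwarfs anything one 80-year lifetime of direct promotion can add, so even a very small $\Delta p$ tips the balance.)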

When I first wrote this, I thought the argument had more force for people with high moral uncertainty - i.e. the more of Oesterheld's list you think are plausibly actually terminal goods, the more you'd focus on x-risk. But I don't think that follows, and anyway there are much stronger kinds of uncertainty, involving not just which terminal values you credit, but whether there are moral properties at all, whether maximisation is imperative, and whether promoting or honouring values is what counts as good. The above argument is about goal-independence (within consequentialism), and says nothing about framework-independence. So:


Who doesn't have to work on reducing x-risk?

* People with incredibly high confidence that nothing can be done to affect extinction (that is, well above 99% confidence).

* Avowed egoists. (Though Scheffler argues that even they have to care here.)

* ‘Parochialists’: People who think that the responsibility to help those you're close to outweighs your responsibility to any number of distant others.

* People with values that don't depend on the world:

* Nihilists, or other people who think there are no moral properties.
* People with an ‘honouring’ kind of ethics, like Kantians, Aristotelians, or some religions. Philip Pettit makes a helpful distinction: when you act, you can either ‘honour’ a value (directly instantiate it) or ‘promote’ it (make more opportunities for it, make it more likely in future). This is a key difference between consequentialism and two of the other big moral theories (deontology and virtue ethics): the latter two only value honouring. This could get them off the logical hook because, unless “preventing extinction” was a duty or virtue in itself, or fit easily into another duty or virtue, there's no moral force pushing them to prevent it. (You could try to construe reducing x-risk as “care for others” or “generosity”.) [2]


* People who disvalue life:

* Absolute negative utilitarians or antinatalists: people who think that life is generally negative in itself.
* People who think that human life has, and will continue to have, net-negative effects. Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, between ‘one which ends all human life’ and ‘one which ends all life’. They'd still have to work against the latter, which covers the artificial x-risks.
* Ordinary utilitarians might also be committed to this view, in certain terrible contingencies (e.g. if we inexorably increased the number of suffering beings via colonisation or simulation).


* The end of the world is not the worst scenario: you might instead have a world with unimaginable amounts of suffering lasting a very long time, an ‘S-risk’. You might work on those instead. This strikes me as admirable and important; it just doesn't have the complete value-independence that impressed me about the argument at the start of this piece.

* People who don't think that probability estimates or expected value should be used for moral decisions. (‘Intuitionists’.)

* You might be ‘satisficing’ - you might view the Good as a piecewise function, where having some amount of the good is vitally important, but any more than that has no moral significance. This seems more implausible than maximisation.


Uncertainties

* We really don't know how tractable these risks are: we haven't acted, as a species, on unprecedented century-long projects with literally only one chance for success. (But again, this uncertainty doesn't license inactivity, because the downside is so large.)

* I previously had the following exempted:

People with incredibly high confidence that extinction will not happen (that is, well above 99% confidence). This is much higher confidence than that of most people who have looked hard at the matter.

But Ord argues that these people actually should prioritise x-risk, since extinction being very hard implies a long future, and so much greater future expected value. It's not clear what assumptions his model makes, besides a low discount rate and at least minimal returns to x-risk reduction. (h/t makaea.) (A minimal sketch of the idea follows, after this list.)


* There is some chance that our future will be negative - especially if we spread normal ecosystems to other planets, or if hyper-detailed simulations of people turn out to have moral weight. If the risk increased (if the moral circle stopped expanding, if research into phenomenal consciousness and moral weight stagnated), these could ‘flip the sign’ on extinction, for me.

* I was going to add ‘person-affecting’ people to the exemption list. But actually, if the probability of extinction in the next 80 years (one lifetime) is high enough (1%?), then they probably have to act too, even while ignoring future generations.

* Most people are neither technical researchers nor willing to go into government. So, if x-risk organisations ran out of “room for more funding”, then most people would be off the hook (back to maximising their terminal goal directly), until the organisations had room again.

* We don't really know how common real deontologists are. (The one study I know of is n=1000, on Sweden, probably an unusually consequentialist place.) As value-honourers, they can maybe duck most of the force of the argument.

* Convergence, as in the above argument, is often suspicious when humans are persuading themselves or others.
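
(A minimal sketch of Ord's hazard-rate point mentioned above - my reconstruction, not his actual model: assume a constant per-century extinction probability $p$ and no pure time discounting. The expected number of future centuries is then

$$ \sum_{t=1}^{\infty} (1-p)^{t} \;=\; \frac{1-p}{p} \;\approx\; \frac{1}{p} \quad \text{for small } p, $$

so the value at stake scales like $1/p$: the safer you think we are, the longer the expected future, and the more expected value even a small absolute reduction in this century's risk protects.)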

---

[1]: For example, Nietzsche said ‘Without music, life would be a mistake.’ (Though strictly this is bluster: he certainly valued many other things.)

[2]: Pummer claims that all “minimally plausible” versions of the honouring ethics must include some promotion. But I don't see how they can, without being just rule-utilitarians in disguise.

EDIT 8/12/18: Formatting. Also added Ord's hazard rate argument, h/t makaea.