S-risk FAQ

The idea that the future might contain astronomical amounts of suffering, and that we should work to prevent such worst-case outcomes, has lately attracted some attention. I’ve written this FAQ to help clarify the concept and to clear up potential misconceptions.

[Crossposted from my website on s-risks.]

Gen­eral questions

What are s-risks?

In the essay Reducing Risks of Astronomical Suffering: A Neglected Priority, s-risks (also called suffering risks or risks of astronomical suffering) are defined as “events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far”.

If you’re not yet familiar with the idea, you can find out more by watching Max Daniel’s EAG Boston talk or by reading the introduction to s-risks.

Can you give an example of what s-risks could look like?

In the future, it may become possible to run such complex simulations that the (artificial) individuals inside these simulations are sentient. Nick Bostrom coined the term mindcrime for the idea that the thought processes of a superintelligent AI might cause intrinsic moral harm if they contain (suffering) simulated persons. Since there are instrumental reasons to run many such simulations, this could lead to vast amounts of suffering. For example, an AI might use simulations to improve its knowledge of human psychology or to predict what humans would do in a conflict situation.

Other common examples include suffering subroutines and spreading wild animal suffering to other planets.

Isn’t all that rather far-fetched?

At first glance, one could get the impression that s-risks are just unfounded speculation. But to dismiss s-risks as unimportant (in expectation), one would have to be highly confident that their probability is negligible, which is hard to justify upon reflection. The introduction to s-risks gives several arguments why the probability is not negligible after all:

First, s-risks are disjunctive. They can materialize in any number of unrelated ways. Generally speaking, it’s hard to predict the future, and the range of scenarios that we can imagine is limited. It is therefore plausible that unforeseen scenarios – known as black swans – make up a significant fraction of s-risks. So even if any particular dystopian scenario we can conceive of is highly unlikely, the probability of some s-risk may still be non-negligible (a short illustrative calculation follows the third point below).

Second, while s-risks may seem speculative at first, all the underlying assumptions are plausible. [...]

Third, historical precedents do exist. Factory farming, for instance, is structurally similar to (incidental) s-risks, albeit smaller in scale. In general, humanity has a mixed track record regarding responsible use of new technologies, so we can hardly be certain that future technological risks will be handled with appropriate care and consideration.
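
To make the first point more concrete, here is a minimal sketch in Python. The scenario count and per-scenario probability are purely illustrative assumptions, not estimates from the essay; the point is only that even if each individual scenario is unlikely, the chance that at least one of many roughly independent scenarios materializes can be considerably larger.

    # Purely illustrative: assume 10 roughly independent s-risk scenarios,
    # each with a hypothetical 1% probability of materializing.
    num_scenarios = 10
    p_each = 0.01

    # The probability that at least one scenario materializes is one minus
    # the probability that none of them do.
    p_at_least_one = 1 - (1 - p_each) ** num_scenarios
    print(round(p_at_least_one, 3))  # ~0.096, i.e. roughly 10%

And this simple calculation ignores black swans, which would push the total probability higher still.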

Which value systems should care about reducing s-risks?

Virtually everyone would agree that (involuntary) suffering should, all else equal, be avoided. In other words, ensuring that the future does not contain astronomical amounts of suffering is a common denominator of almost all (plausible) value systems.

Work on reducing s-risks is, therefore, a good candidate for compromise between different value systems. Instead of narrowly pursuing our own ethical views in potential conflict with others, we should work towards a future deemed favourable by many value systems.

The future

Aren’t future generations in a much better position to do something about this?

Future generations will probably have more information about s-risks in general, including which ones are the most serious, which does give them the upper hand in finding effective interventions. One might, therefore, argue that later work has a significantly higher marginal impact. However, there are also arguments for working on s-risks now.

First, thinking about s-risks only once they start to materialize does not suffice, because by then it might be too late to do anything about them. Without sufficient foresight and caution, society may already be “locked in” to a trajectory that ultimately leads to a bad outcome.

Second, one main reason why future generations are in a better position is that they can draw on previous work. Earlier work – especially research or conceptual progress – is valuable precisely because it allows future generations to reduce s-risks more effectively.

Third, even if future generations are able to prevent s-risks, it’s not clear whether they will care enough to do so. We can work to ensure this by growing a movement of people who want to reduce s-risks. In this regard, we should expect earlier growth to be more valuable than later growth.

Fourth, if there’s a sufficient probability that smarter-than-human AI will be built in this century, it’s possible that we already are in a unique position to influence the future. If it’s possible to work productively on AI safety now, then it should also be possible to reduce s-risks now.

Toby Ord’s essay The timing of labour aimed at reducing existential risk addresses the same question for efforts to reduce x-risks. He gives two additional reasons in favor of earlier work: the possibility of changing course (which is more valuable if done early on) and the potential for self-improvement.

Seeing as humans are (at least somewhat) benevolent and will have advanced technological solutions at their disposal, isn’t it likely that the future will be good anyway?

If you are (very) optimistic about the future, you might think that s-risks are unlikely for this reason (which is different from the objection that s-risks seem far-fetched). A common argument is that avoiding suffering will become easier with more advanced technology; since humans care at least a little bit about reducing suffering, there will be less suffering in the future.

While this argument has some merit, it’s not airtight. By default, when we humans encounter a problem in need of solving, we tend to implement the most economically efficient solution, often irrespective of whether it involves large amounts of suffering. Factory farming provides a good example of such a mismatch: faced with the problem of producing meat for millions of people as efficiently as possible, humanity implemented a solution that happened to involve an immense amount of nonhuman suffering.

Also, the future will likely contain vastly larger populations, especially if humans colonize space at some point. All else being equal, such an increase in population may also imply (vastly) more suffering. Even if the fraction of suffering decreases, it’s not clear whether the absolute amount will be higher or lower.
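
As a back-of-the-envelope sketch of this scaling point, consider the following Python snippet. All numbers are purely hypothetical assumptions, chosen only to illustrate how a shrinking fraction of suffering can coexist with a much larger absolute amount.

    # Hypothetical numbers, for illustration only.
    current_population = 1e10          # assumed number of sentient beings today
    current_fraction_suffering = 0.1   # assumed fraction suffering severely today

    future_population = 1e16           # assumed population after space colonization
    future_fraction_suffering = 0.001  # assumed (much smaller) future fraction

    current_suffering = current_population * current_fraction_suffering  # 1e9
    future_suffering = future_population * future_fraction_suffering     # 1e13

    # The fraction fell 100-fold, yet the absolute amount rose 10,000-fold.
    print(future_suffering / current_suffering)  # 10000.0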

If your primary goal is to reduce suffering, then your actions matter less in futures that turn out to be ‘automatically’ good (because they contain little or no suffering anyway), and matter more in worlds where s-risks are likely. Given sufficient uncertainty about which kind of future we are headed for, this asymmetry is a precautionary reason to focus on the possibility of bad outcomes.

Does it only make sense to work on s-risks if one is very pessimistic about the future?

Although the degree to which we are optimistic or pessimistic about the future is clearly relevant to how concerned we are about s-risks, one would need to be unusually optimistic about the future to rule out s-risks entirely.

From the introduction to s-risks:

Working on s-risks does not require a particularly pessimistic view of technological progress and the future trajectory of humanity. To be concerned about s-risks, it is sufficient to believe that the probability of a bad outcome is not negligible, which is consistent with believing that a utopian future free of suffering is also quite possible.

In other words, being concerned about s-risks does not require unusual beliefs about the future.

S-risks and x-risks

How do s-risks relate to existential risks (x-risks)? Are s-risks a subclass of x-risks?

First, recall Nick Bostrom’s definition of x-risks:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

S-risks are defined as follows:

S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

According to these definitions, both x-risks and s-risks relate to shaping the long-term future, but reducing x-risks is about actualizing humanity’s potential, while reducing s-risks is about preventing bad outcomes.

There are two possible views on the question of whether s-risks are a subclass of x-risks.

According to one possible view, it’s conceivable to have astronomical amounts of suffering that do not lead to extinction or curtail humanity’s potential. We could even imagine that some forms of suffering (such as suffering subroutines) are instrumentally useful to human civilization. Hence, not all s-risks are also x-risks. In other words, some possible futures are both an x-risk and an s-risk (e.g. uncontrolled AI), some would be an x-risk but not an s-risk (e.g. an empty universe), some would be an s-risk but not an x-risk (e.g. suffering subroutines), and some are neither, as summarized in the following table:

               S-risk: Yes                S-risk: No
X-risk: Yes    Uncontrolled AI            Empty universe
X-risk: No     Suffering subroutines      Utopian future

The second view is that the meaning of “potential” depends on your values. For example, you might think that a cosmic future is only valuable if it does not contain (severe) suffering. If “potential” refers to the potential of a utopian future without suffering, then every s-risk is (by definition) an x-risk, too.

How do I decide whether reducing extinction risks or reducing s-risks is more important?

This requires difficult ethical judgment calls. The answer depends on how much you care about reducing suffering versus increasing happiness, and how you would make tradeoffs between the two. (It also raises fundamental questions about how happiness and suffering can be measured and compared.)

Proponents of suffering-focused ethics argue that the reduction of suffering is of primary moral importance, and that additional happiness cannot easily counterbalance (severe) suffering. According to this perspective, preventing s-risks is morally most urgent.

Other value systems, such as classical utilitarianism or fun theory, emphasize the creation of happiness or other forms of positive value, and hold that the vast possibilities of a utopian future can outweigh s-risks. Although preventing s-risks is still valuable on this view, ensuring that humanity has a cosmic future at all by reducing extinction risks is considered even more important.

In addition to normative issues, the answer also depends on the empirical question of how much happiness and suffering the future will contain. David Althaus suggests that we consider both the normative suffering-to-happiness trade ratio (NSR), which measures how we would trade off suffering and happiness in theory, and the expected suffering-to-happiness ratio (ESR), which measures the (relative) amounts of suffering and happiness we expect in the future.

In this framework, those who emphasize happiness (low NSR) or are optimistic about the future (low ESR) will tend to focus on extinction risk reduction. If the product of NSR and ESR is high – either because of a normative emphasis on suffering (high NSR) or pessimistic views about the future (high ESR) – it’s more plausible to focus on s-risk reduction instead.
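
As a minimal sketch of how this framework might be applied, here is a toy decision rule in Python. The inputs, the threshold of 1, and the example numbers are all illustrative assumptions; the framework itself does not prescribe specific values or a cutoff.

    # Toy application of the NSR/ESR framework; all numbers are assumptions.

    def suggested_focus(nsr, esr, threshold=1.0):
        """Very rough prioritization heuristic.

        nsr: normative suffering-to-happiness trade ratio
             (roughly, how heavily suffering is weighted against happiness).
        esr: expected suffering-to-happiness ratio
             (relative amounts of suffering vs. happiness expected in the future).
        """
        if nsr * esr > threshold:
            return "focus more on s-risk reduction"
        return "focus more on extinction risk reduction"

    # Strong emphasis on suffering (NSR = 10), mild optimism (ESR = 0.3):
    print(suggested_focus(10, 0.3))  # product 3.0 -> s-risk reduction

    # Symmetric weighting (NSR = 1), optimistic outlook (ESR = 0.2):
    print(suggested_focus(1, 0.2))   # product 0.2 -> extinction risk reduction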

Miscellaneous

Is the concept of s-risks tied to the possibility of AGI and artificial sentience?

Many s-risks, such as suffering subroutines or mindcrime, have to do with artificial minds or smarter-than-human AI. But the concept of s-risks does not depend on the possibility of AI scenarios. For example, spreading wild animal suffering to other planets requires neither artificial sentience nor AI.

Examples often involve artificial sentience, however, due to the vast number of artificial beings that could be created if artificial sentience becomes feasible at any time in the future. Combined with humanity’s track record of insufficient moral concern for “voiceless” beings at our command, this might pose a particularly serious s-risk. (More details here.)

Why would we think that artificial sentience is possible in the first place?

This question has been discussed extensively in the philosophy of mind. Many popular theories of consciousness, such as Global workspace theory, higher-order theories, or Integrated information theory, agree that artificial sentience is possible in principle. Philosopher Daniel Dennett puts it like this:

I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are. We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.

As an example of the sort of reasoning involved, consider this intuitive thought experiment: if you were to take a sentient biological brain and replace its neurons one by one with functionally equivalent computer chips, would that somehow make the brain less sentient? Would the brain still be sentient once all of its biological neurons have been replaced? If not, at what point would it cease to be sentient?

The debate is not settled yet, but it seems at least plausible that artificial sentience is possible in principle. Also, we don’t need to be certain to justify moral concern. It’s sufficient that we can’t rule it out.

Ok, I’m sold. What can I personally do to help reduce s-risks?

A simple first step is to join the discussion, e.g. in this Facebook group. If more people think and write about the topic (either independently or at EA organizations), we’ll make progress on the crucial question of how best to reduce s-risks. At the same time, it helps build a community that, in turn, can get even more people involved.

If you’re interested in doing serious research on s-risks right away, you could have a look at this list of open questions to find a suitable research topic. Work in AI policy and strategy is another interesting option, as progress in this area allows us to shape AI in a more fine-grained way, making it easier to identify and implement safety measures against s-risks.

Another possibility is to donate to organizations working on s-risk reduction. Currently, the Foundational Research Institute is the only group with an explicit focus on s-risks, but other groups also contribute to solving issues that are relevant to s-risk reduction. For example, the Machine Intelligence Research Institute aims to ensure that smarter-than-human artificial intelligence is aligned with human values, which probably also reduces s-risks. Charities that promote broad societal improvements, such as better international cooperation or beneficial values, may also contribute to s-risk reduction, albeit in a less targeted way.

[Disclaimer: I’m in close contact with the Foundational Research Institute, but I am not employed there and don’t receive any financial compensation.]