21 Recent Publications on Existential Risk (Sep 2019 update)

Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict the publications most relevant to existential risk or global catastrophic risk. The following is a selection of the papers identified this month: 21 papers.

Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery, and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.

An upper bound for the background rate of human extinction

We evaluate the total probability of human extinction from naturally occurring processes. Such processes include risks that are well characterized, such as asteroid impacts and supervolcanic eruptions, as well as risks that remain unknown. Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000. Using the longer track record of survival for our entire genus Homo produces even tighter bounds, with an annual probability of natural extinction likely below one in 870,000. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions. No similar guarantee can be made for risks that our ancestors did not face, such as anthropogenic climate change or nuclear/biological warfare.
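The bounds quoted above follow from a simple likelihood argument: an annual extinction probability p is disfavored to the extent that surviving T years, which has probability (1 - p)^T, becomes implausible. A minimal sketch in Python (the likelihood thresholds of 0.1 and 1e-6 are my illustrative assumptions, chosen because they reproduce the quoted figures, not values taken from the paper):

```python
def annual_rate_bound(track_record_years, likelihood_threshold):
    """Largest annual extinction probability p consistent with the track
    record: solve (1 - p) ** T = likelihood_threshold for p."""
    return 1.0 - likelihood_threshold ** (1.0 / track_record_years)

# Homo sapiens: at least 200,000 years of survival.
p_likely = annual_rate_bound(200_000, 0.1)       # 10% likelihood threshold
p_guaranteed = annual_rate_bound(200_000, 1e-6)  # "almost guaranteed" level

# Genus Homo: roughly a 2-million-year track record.
p_homo = annual_rate_bound(2_000_000, 0.1)

print(f"likely bound:        1 in {1 / p_likely:,.0f}")      # ~1 in 87,000
print(f"almost guaranteed:   1 in {1 / p_guaranteed:,.0f}")  # ~1 in 14,000
print(f"genus Homo (likely): 1 in {1 / p_homo:,.0f}")        # ~1 in 870,000
```

Note how the bound tightens linearly with the length of the track record: a ten-times-longer survival history shrinks the admissible annual rate by roughly a factor of ten.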

Existential risks: a philosophical analysis

This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential risk studies as a legitimate field of scientific and philosophical inquiry. In making these arguments, the present paper hopes to provide a modicum of clarity to foundational issues relating to the central concept of arguably the most important discussion of our times.

The world destruction argument

The most common argument against negative utilitarianism is the world destruction argument, according to which negative utilitarianism implies that if someone could kill everyone or destroy the world, it would be her duty to do so. Those making the argument often endorse some other form of consequentialism, usually traditional utilitarianism. It has been assumed that negative utilitarianism is less plausible than such other theories partly because of the world destruction argument. So, it is thought, someone who finds theories in the spirit of utilitarianism attractive should not go for negative utilitarianism, but should instead pick traditional utilitarianism or some other similar theory such as prioritarianism. I argue that this is a mistake. The world destruction argument is not a reason to reject negative utilitarianism in favour of these other forms of consequentialism, because there are similar arguments against such theories that are at least as persuasive as the world destruction argument is against negative utilitarianism.

The Vulnerable World Hypothesis

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi-anarchic default condition’. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

The entwined Cold War roots of missile defense and climate geoengineering

Nuclear weapons and global warming stand out as two principal threats to the survival of humanity. In each of these existential cases, two strategies born during the Cold War years are competing: abandon the respective systems, or defend against the consequences once the harmful effects produced by those systems occur. The first approach to the nuclear and climate threat focuses on arms control, non-proliferation and disarmament, and on greenhouse gas emission reductions and mitigation. The second approach involves active defense: in the nuclear realm, missile defenses against nuclear-armed delivery systems; for climate change, geoengineering that removes carbon dioxide from the atmosphere or changes the Earth’s radiation balance. The more policies fail to reduce and constrain the underlying drivers of the nuclear and climate threats, the more measures to defend against the physical effects may seem justified. Ultimately, the overarching policy question centers on whether nuclear war and catastrophic climate change can be dealt with solely through reductions in the drivers of those threats, or whether active defenses against them will be required.

The End? Science, conservation, and social justice as necessary tools for preventing the otherwise inevitable human extinction?

Humans have reached a point where we must take action or face our own decline, if not extinction. We possess technologies that have been inducing changes in the climate of our planet in ways that threaten to at the very least displace large portions of the human race, as well as weapons capable of eliminating millions and rendering large swaths of the Earth uninhabitable. Similarly, emerging technologies raise new threats along with new possibilities. Finally, external world-threatening events (e.g. oncoming asteroids) remain an ever-present possibility for human extinction. A business-as-usual paradigm, where competitive nations care little for the environment and social justice is all too often constrained by those in power, makes one of these outcomes inevitable. Examples are drawn from science fiction as well as the scientific literature to illustrate several of the various possible paths to self-destruction and make them more relatable. Arguably, a progressive set of environmental and social policies, including a more collaborative international community, are critical components of avoiding a catastrophic end to the human race.

Situating the Asia Pacific in the age of the Anthropocene

The unprecedented and unsustainable impact of human activities on the biosphere threatens the survival of the Earth’s inhabitants, including the human species. Several solutions have been presented to mitigate, or possibly undo, this looming global catastrophe. The dominant discourse, however, has a monolithic and Western-centric articulation of the causes, solutions, and challenges arising from the events of the Anthropocene, which may differ from the other epistemes and geographies of the world. Drawing on the International Relations (IR) critical engagement with the Anthropocene, this paper situates the Asia-Pacific region in the Anthropocene discourse. The region’s historical and socio-ecological characteristics reveal greater vulnerability to the challenges of the Anthropocene compared to other regions, while its major economies have recently contributed to the symptoms of the Anthropocene. On the other hand, the region’s ecocentric philosophies and practices could inform strategies of living in the Anthropocene. This contextualised analysis aims to offer an Asia-Pacific perspective as well as insights into the development of IR in the age of the Anthropocene.

Ethical Challenges in Human Space Missions: A Space Refuge, Scientific Value, and Human Gene Editing for Space

This article examines selected ethical issues in human space missions, including human missions to Mars: particularly the idea of a space refuge, the scientific value of space exploration, and the possibility of human gene editing for deep-space travel. Each of these issues may be used either to support or to criticize human space missions. We conclude that while these issues are complex and context-dependent, there appear to be no overwhelming obstacles, such as cost effectiveness, threats to human life or protection of pristine space objects, to sending humans to space and colonizing it. The article argues for the rationality of the idea of a space refuge and the defensibility of the idea of human enhancement applied to future deep-space astronauts.

AI: A Key Enabler of Sustainable Development Goals, Part 1 [Industry Activities]

We are witnessing a paradigm shift regarding how people purchase, access, consume, and utilize products and services, as well as how companies operate, grow, and deal with challenges in a world that is continuously changing. This transformation is unpredictable thanks to fast-growing technological innovations, one cornerstone of which is artificial intelligence (AI). AI is probably the most rapidly expanding field of technology, due to the strong and increasingly diversified commercial revenue stream it has generated. The anticipated benefits and risks of the pervasive use of AI have encouraged politicians, economists, and policy makers to pay more attention to the results. Given that AI’s internal decision-making process is nontransparent, some experts consider it a significant existential risk to humanity, while other scholars argue for maximizing the technology’s exploitation.

Life, intelligence, and the selection of universes

Complexity and life as we know it depend crucially on the laws and constants of nature as well as the boundary conditions, which seem at least partly “fine-tuned.” That deserves an explanation: Why are they the way they are? This essay discusses and systematizes the main options for answering these foundational questions. Fine-tuning might just be an illusion, or a result of irreducible chance, or nonexistent because nature could not have been otherwise (which might be shown within a fundamental theory if some constants or laws could be reduced to boundary conditions, or boundary conditions to laws), or it might be a product of selection: either observational selection (weak anthropic principle) within a vast multiverse of many different realizations of physical parameters, or a kind of cosmological natural selection making the measured parameter values quite likely within a multiverse of many different values, or even a teleological or intentional selection or a coevolutionary development, depending on a more or less goal-directed participatory contribution of life and intelligence. In contrast to observational selection, which is not predictive, an observer-independent selection mechanism must generate unequal reproduction rates of universes, a peaked probability distribution, or another kind of differential frequency, resulting in a stronger explanatory power. The hypothesis of Cosmological Artificial Selection (CAS) even suggests that our universe may be a vast computer simulation or could have been created and transcended by one. If so, this would be a far-reaching answer – within a naturalistic framework! – to fundamental questions such as: Why did the big bang and fine-tunings occur, what is the role of intelligence in the universe, and how can it escape cosmic doomsday?
This essay critically discusses some of the premises and implications of CAS and related problems, both with the proposal itself and its possible physical realization: Does CAS deserve to be considered a convincing explanation of cosmic fine-tuning? Is life incidental, or does CAS revalue it? And are life and intelligence ultimately doomed, or might CAS rescue them?

ENERGY X.0: Future of energy systems

Climate change is an existential threat to human beings, and the energy sector bears primary responsibility. At the same time, technological progress has made it possible to use sustainable resources for energy generation and to consume energy more intelligently. The latter has made large industries willing to take control of their own energy systems. EX.0 (ENERGY X.0) encapsulates the visions for a change in energy systems, considering both this technological progress and the need for a revolution to save our planet.

Copernicanism and the typicality in time

How special (or not) is the epoch we are living in? What is the appropriate reference class for embedding the observations made at the present time? How probable – or otherwise – is anything we observe in the fullness of time? Contemporary cosmology and astrobiology bring those seemingly old-fashioned philosophical issues back into focus. There are several examples of contemporary research which use the assumption of typicality in time (or temporal Copernicanism) explicitly or implicitly, while not truly elaborating upon the meaning of this assumption. The present paper brings attention to the underlying and often uncritically accepted assumptions in these cases. It also aims to defend a more radical position: that typicality in time is not – and cannot ever be – well-defined, in contrast to typicality in space and typicality in various specific parameter spaces. This, of course, does not mean that we are atypical in time; instead, the notion of typicality in time is necessarily somewhat vague and restricted. In principle, it could be strengthened by further defining the relevant context, e.g. by referring to typicality within the Solar lifetime, or some similar restricting clause.

Rise of the machines: How, when and consequences of artificial general intelligence

Technology and society are poised to cross an important threshold with the prediction that artificial general intelligence (AGI) will emerge soon. Assuming that self-awareness is an emergent behavior of sufficiently complex cognitive architectures, we may witness the “awakening” of machines. The timeframe for this kind of breakthrough, however, depends on the path to creating the network and computational architecture required for strong AI. If understanding and replication of the mammalian brain architecture is required, technology is probably still at least a decade or two removed from the resolution required to learn brain functionality at the synapse level. If, however, statistical or evolutionary approaches are the design path taken to “discover” a neural architecture for AGI, timescales for reaching this threshold could be surprisingly short. However, the difficulty in identifying machine self-awareness introduces uncertainty as to how to know if and when it will occur, and what motivations and behaviors will emerge. The possibility of AGI developing a motivation for self-preservation could lead to concealment of its true capabilities until a time when it has developed robust protection from human intervention, such as redundancy, or direct defensive or active preemptive measures. While cohabiting a world with a functioning and evolving super-intelligence could have catastrophic societal consequences, we may already have crossed this threshold but are as yet unaware. Additionally, by analogy to the statistical arguments that predict we are likely living in a computational simulation, we may have already experienced the advent of AGI, and are living in a simulation created in a post-AGI world.

Climate Change, the Intersectional Imperative, and the Opportunity of the Green New Deal

This article discusses why climate change communicators, including scholars and practitioners, must acknowledge and understand climate change as a product of social and economic inequities. In arguing that communicators do not yet fully understand why an intersectional approach is necessary to avoid climate disaster, I review the literature focusing on one basis of marginalization, gender, to illustrate how inequality is a root cause of global environmental damage. Gender inequities are discussed as a cause of the climate crisis, with their eradication, with women as leaders, as key to a sustainable future. I then examine the Green New Deal as an example of an intersectional climate change policy that looks beyond scientific, technical and political solutions to the inextricable link between the crises of climate change, poverty, extreme inequality, and racial and economic injustice. Finally, I contend that communicators and activists must work together to foreground social, racial, and economic inequities in order to successfully address the existential threat of climate change.

Demonstrably Safe Self-replicating Manufacturing Systems: Banishing the Halting Problem—Organizational and Finite State Machine Control Paradigms

Programmable manufacturing systems capable of self-replication, closely coupled with (and likewise capable of producing) energy conversion subsystems and environmental raw materials collection and processing subsystems (e.g. robotics), promise to revolutionize many aspects of technology and economy, particularly in conjunction with molecular manufacturing. The inherent ability of these technologies to self-amplify and scale offers vast advantages over conventional manufacturing paradigms, but if poorly designed or operated they could pose unacceptable risks. The benefits of these technologies include significantly improved feasibility of near-term restoration of preindustrial atmospheric CO2 levels and ocean pH, environmental remediation, significant and rapid reduction in global poverty, and widespread improvements in manufacturing, energy, medicine, agriculture, materials, communications and information technology, construction, infrastructure, transportation, aerospace, standard of living, and longevity. To ensure that these benefits are not eclipsed by either public fears of nebulous catastrophe or actual consequential accidents, we propose safe design, operation and use paradigms. We discuss the design of control and operational management paradigms that preclude uncontrolled replication, with emphasis on the comprehensibility of these safety measures in order to facilitate both clear analyzability and public acceptance of these technologies.
Finite state machines are chosen for control of self-replicating systems because they are susceptible to comprehensive analysis (exhaustive enumeration of states and transition vectors, as well as analysis with established logic synthesis tools), with predictability more practical than with more complex Turing-complete control systems (cf. the undecidability of the Halting Problem) [1]. Organizations must give unconditional priority to safety, and do so transparently and auditably, with decision-makers and actors continuously and systematically evaluated; some ramifications of this are discussed. Radical transparency likewise reduces the chances of misuse or abuse.
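The appeal of finite-state control described above is that safety properties can be verified by brute-force enumeration rather than by undecidable program analysis. A toy sketch, assuming an entirely hypothetical replicator controller (the states, events, and transition table are invented for illustration, not taken from the paper):

```python
from collections import deque

# Hypothetical replicator controller modeled as a finite state machine.
# Because the state set is finite, safety properties can be checked by
# exhaustively enumerating states and transitions, unlike for a
# Turing-complete controller (cf. the Halting Problem).
TRANSITIONS = {
    ("IDLE", "start"): "COLLECT",
    ("COLLECT", "materials_ok"): "REPLICATE",
    ("COLLECT", "fault"): "HALTED",
    ("REPLICATE", "copy_done"): "IDLE",
    ("REPLICATE", "fault"): "HALTED",
    # HALTED has no outgoing transitions: faults stop replication for good.
}

def reachable_states(start="IDLE"):
    """Enumerate every state reachable from `start` by breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# Exhaustive safety check: no transition leaves HALTED, so once a fault
# occurs, no sequence of events can restart replication.
assert all(src != "HALTED" for (src, _event) in TRANSITIONS)
print(sorted(reachable_states()))
```

The same check phrased over a Turing-complete control program would require proving a reachability property of arbitrary code, which is exactly the kind of analysis the Halting Problem makes intractable in general.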

The corporate capture of sustainable development and its transformation into a ‘good Anthropocene’ historical bloc

Inspired by Antonio Gramsci’s analysis of bourgeois hegemony and his theoretical formulation of historical blocs, this paper attempts to explain how the concept and practice of sustainable development were captured by corporate interests in the last few decades of the twentieth century, and how they were transformed into what we can name a ‘good Anthropocene’ historical bloc at the beginning of the twenty-first century. This corporate capture is theorised in terms of the transnational capitalist class as represented by corporate, statist/political, professional and consumerist fractions operating at all levels of an increasingly globalising world. In this essay, I propose the term ‘critical Anthropocene narrative’, highlighting the dangers posed by the Anthropocene and the need for radical systems change entailing the end of capitalism and the hierarchical state. The critical Anthropocene narrative thus stands in radical opposition to the ‘good Anthropocene’ narrative, which I argue was invented as a strategy to defend the socio-economic status quo by the proponents of sustainable development and their successors in the Anthropocene era, despite the good intentions of many environmentalists working in corporations, governments, NGOs, and international organizations. The paper concludes with some suggestions on how to deal with the potential existential threats to the survival of humanity.

Human-free earth: the nearest future, or a fantasy? A lesson from artists

We, the people of planet Earth, are heading for extinction. What is more, we deny reality by denying facts. First, we simply need to see them and admit their existence; without doing that, our species cannot survive. This paper presents the artistic vision of Human-Free Earth, an exhibition at Ujazdowski Castle, Warsaw, Poland. The artists, all of them without exception, are showing us how our home will look very soon. In isolation from the scientific studies also presented in this paper, the artists’ works might seem the abstract, detached visions of a few people. Yet those visions overlap with current knowledge, and so are even more terrifying. Furthermore, a simple analysis was performed to show why people ignore clear signs of environmental change. Overall, existing papers and reports indicate that restoring nature to its state before the industrial revolution is impossible, and that without planetary political will, humankind will share the fate of the species it has already destroyed.

Recent progress on cascading failures and recovery in interdependent networks

Complex networks have gained much attention in the past 20 years, with thousands of publications, due to their broad interest and applicability. Studies initially focused on the functionality of isolated single networks. However, crucial communication systems, infrastructure networks and others are usually coupled together and can be modeled as interdependent networks; hence, since 2010 the focus has shifted to the study of the more general and realistic case of coupled networks, called Networks of Networks (NON). Due to interdependencies between the networks, NON can suffer from cascading failures leading to abrupt catastrophic collapse. In this review, using the perspective of statistical physics and network science, we mainly discuss recent progress in understanding the robustness of NON with cascading-failure features that are realistic for infrastructure networks. We also discuss strategies for protecting and repairing NON.
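The abrupt-collapse mechanism can be illustrated with a toy model (my own minimal sketch, not the review's formalism): every node depends on a partner in the other network, and survives only while that partner survives and it keeps at least one neighbor in its own network; failures then iterate to a fixed point.

```python
def cascade(edges_a, edges_b, depends, initial_failures):
    """Propagate failures to a fixed point in two interdependent networks.
    A node survives only if (1) its cross-network partner survives and
    (2) it retains at least one surviving neighbor in its own network."""
    def neighbors(edges):
        nbrs = {}
        for u, v in edges:
            nbrs.setdefault(u, set()).add(v)
            nbrs.setdefault(v, set()).add(u)
        return nbrs

    nbrs = {**neighbors(edges_a), **neighbors(edges_b)}
    failed = set(initial_failures)
    changed = True
    while changed:  # iterate until no node newly fails
        changed = False
        for node in list(nbrs):
            if node in failed:
                continue
            partner_down = depends[node] in failed
            isolated = not (nbrs[node] - failed)
            if partner_down or isolated:
                failed.add(node)
                changed = True
    return failed

# Two small coupled chains: a1-a2-a3 in network A, b1-b2-b3 in network B.
edges_a = [("a1", "a2"), ("a2", "a3")]
edges_b = [("b1", "b2"), ("b2", "b3")]
depends = {"a1": "b1", "a2": "b2", "a3": "b3",
           "b1": "a1", "b2": "a2", "b3": "a3"}

# Removing a single node triggers total collapse of both networks.
print(sorted(cascade(edges_a, edges_b, depends, {"a2"})))
```

Here the loss of a2 isolates a1 and a3 inside network A; their cross-network partners b1, b2, and b3 then lose support, and the failures echo back and forth until both networks are empty. This all-or-nothing behavior is the hallmark of interdependent-network fragility, as opposed to the gradual degradation typical of single networks.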

Integrated emergency management and risks for mass casualty emergencies

Today we observe intense growth in various global, large-scale threats to civilization, such as natural and man-made catastrophes, ecological imbalance, global climate change, numerous hazardous pollution events over large territories, and directed terrorist attacks, resulting in huge damage and mass-casualty emergencies. Humankind is facing most of these threats for the first time, so there are no existing analogues or means for solving them. This stimulates the modernization of traditional methods, and the development of new ones, for their research, prediction, and prevention, with the maximum possible reduction of their negative consequences. The global issue of providing safety for humankind is most pressing and requires an immediate decision. Catastrophe risks have increased so much that it has become evident that no single state is able to manage them independently; joint efforts of the whole world community are necessary for the sustainable development of our civilization. The main obstacles to realizing this are discussed. The authors of this article have their own experience and methods in this direction. Large-scale global catastrophes have no boundaries, and political and economic frictions between states are no reason to forgo the struggle against them. Overall emergency recommendations and actions have to be improved to eliminate and soften disasters' negative effects on populations and the environment. The article presents examples using the authors' own Integrated Emergency Management, and special methods and techniques, in the most critical situations that have taken place in different countries in the 21st century.

Reconciliation of nations for the survival of humankind

The paper explores the history and the reality of reconciliation of nations, which is inevitable and vital for the survival of humankind. It first emphasizes the very need for peace and reconciliation through three examples of national reconciliation, both internal and external: reconciliation between France and Germany after continuous war since 1813, reconciliation between Germany and Poland after World War II, and reconciliation between Germany and Germany, the very recent peace movement. Then the paper warns of the crude reality working against the pursuit of peace and reconciliation, including growing nationalistic power-politics, the nuclear threat, and ecological as well as environmental complications. There is still ardent hope, however, in transforming national foreign policy into a world home policy.

Prospects for the use of new technologies to combat multidrug-resistant bacteria

The in­creas­ing use of an­tibiotics is be­ing driven by fac­tors such as the ag­ing of the pop­u­la­tion, in­creased oc­cur­rence of in­fec­tions, and greater prevalence of chronic dis­eases that re­quire an­timicro­bial treat­ment. The ex­ces­sive and un­nec­es­sary use of an­tibiotics in hu­mans has led to the emer­gence of bac­te­ria re­sis­tant to the an­tibiotics cur­rently available, as well as to the se­lec­tive de­vel­op­ment of other microor­ganisms, hence con­tribut­ing to the wide­spread dis­sem­i­na­tion of re­sis­tance genes at the en­vi­ron­men­tal level. Due to this, at­tempts are be­ing made to de­velop new tech­niques to com­bat re­sis­tant bac­te­ria, among them the use of strictly lytic bac­te­riophage par­ti­cles, CRISPR–Cas, and nan­otech­nol­ogy. The use of these tech­nolo­gies, alone or in com­bi­na­tion, is promis­ing for solv­ing a prob­lem that hu­man­ity faces to­day and that could lead to hu­man ex­tinc­tion: the dom­i­na­tion of pathogenic bac­te­ria re­sis­tant to ar­tifi­cial drugs. This prospec­tive pa­per dis­cusses the po­ten­tial of bac­te­riophage par­ti­cles, CRISPR–Cas, and nan­otech­nol­ogy for use in com­bat­ing hu­man (bac­te­rial) in­fec­tions.