Centre for the Study of Existential Risk: Six Month Report May-October 2018

We have just prepared a Six Month Report for our Management Board. This is a public version of that Report. We send short monthly updates in our newsletter – subscribe here.

Contents:

  1. Overview

  2. Policy and Industry Engagement

  3. Academic Engagement

  4. Public Engagement

  5. Recruitment and research team

  6. Expert Workshops and Public Lectures

  7. Upcoming activities

  8. Publications

1. Overview

The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to civilizational collapse or human extinction. We study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists and policy-makers working to tackle these risks. Our research focuses on Global Catastrophic Biological Risks, Extreme Risks and the Global Environment, Risks from Artificial Intelligence, and Managing Extreme Technological Risks.

Our last Management Board Report was in May 2018. Over the last six months we have continued to advance existential risk research and grow the community working in the field:

  • Publication of twelve papers on topics including scientific communities and risk, government reactions to disasters, environmental assessment of high-yield farming, decision theory, and theoretical mapping of artificial intelligence;

  • Publication of our Special Issue Futures of Research in Catastrophic and Existential Risk featuring fifteen papers, many first presented at our 2016 Conference;

  • Hosting five expert workshops, helping us to establish/consolidate partnerships with important players such as the Singaporean Government, the UK Defence Science and Technology Laboratory, Munich University, nuclear experts and MIT;

  • Policy-maker engagement with UK Parliamentarians and civil servants, and at the United Nations, where we helped lead a track of the AI for Good summit series;

  • Academic engagement, building the existential risk field by hosting visiting researchers and presenting at leading conferences;

  • Industry engagement, tapping into cutting-edge R&D and nudging companies towards responsibility;

  • Recruited two new postdoctoral researchers, a new administrator, and a Senior Research Associate: Academic Programme Manager;

  • Continued success in fundraising for CSER’s next stage;

  • Engaging the public through media coverage (including on Newsnight) and two public lectures with distinguished speakers; and

  • The release of Lord Martin Rees’ new book, On The Future: Prospects for Humanity.

2. Policy and Industry Engagement

We have had the opportunity to speak directly with policymakers, industry leaders and institutions across the world who are grappling with the difficult and novel challenge of how to unlock the socially beneficial aspects of new technologies while mitigating their risks. Through advice and discussions, we can reframe the policy debate and help shape the trajectory of these technologies themselves.

  • The All-Party Parliamentary Group for Future Generations held two events in Parliament. The APPG was set up by Cambridge students mentored by CSER researchers. This continues our engagement with UK parliamentarians on existential risk topics:

    • Black Sky risks and infrastructure resilience. The main speaker for the evening was Lord Toby Harris, UK coordinator for the Electric Infrastructure Security Council. Julius Weitzdörfer and Dr Simon Beard also spoke. Overview.

    • How do We Make AI Safe for Humans? This event’s speakers were Edward Felten, former Deputy White House CTO, Joanna Bryson, Reader in AI at the University of Bath, Nick Bostrom, Director of the Future of Humanity Institute, and our own Shahar Avin. Overview.

  • The AI for Good Summit series is the leading United Nations platform for dialogue on AI. As the UN seeks to enhance its capacity to address AI issues, we have been invited to share our research and expertise. In May, a joint CSER/CFI team led one of the Summit’s four ‘Tracks’, on Trust in AI. This meant we were able to directly shape which topics global policy-makers from many countries and UN departments engaged with, and helped set the agenda for the next year. Overview.

  • Shahar Avin has had extensive engagement around the major report The Malicious Use of Artificial Intelligence, of which he was the joint lead author. He has presented to the UK Cabinet Office, the US Pacific Northwest National Laboratory (PNNL) and the Dutch Embassy, and at the Stockholm International Peace Research Institute (SIPRI) workshop “Mapping the Impact of Machine Learning and Autonomy on Strategic Stability and Nuclear Risk”.

  • Dr Avin contributed to the Digital Catapult AI Ethics Framework. Digital Catapult is the UK’s leading digital technology innovation centre, funded by the Government. The Framework is intended for use by AI start-ups, nudging AI companies at an early stage, when they are more easily influenced.

  • We continued our industry engagement. Extending our links improves our research by exposing us to the cutting edge of industrial R&D, and helps to nudge powerful companies towards more responsible practices. Seán Ó hÉigeartaigh and Dr Avin presented to Arm, a leading semiconductor and software design company in Cambridge.

  • CSER researchers continued meetings with top UK civil servants as part of the policy fellows programme organised by the Centre for Science and Policy (CSaP).

3. Academic Engagement

As an interdisciplinary research centre within the University of Cambridge, we seek to grow the academic field of existential risk research, so that this important topic receives the rigorous and detailed attention it deserves.

  • Visiting researchers: We have had several visitors, including Dr Rush Stewart from the Munich Center for Mathematical Philosophy, Dr Frank Roevekamp, working on insuring against hidden existential risks, and Prof David Alexander, Professor of Risk & Disaster Reduction at UCL’s Institute for Risk & Disaster Reduction.

  • Julius Weitzdörfer gave presentations at UCLA for the Quantifying Global Catastrophic Risks Workshop at the Garrick Institute for Risk Sciences, and at a Special Session on Global and Catastrophic Risks at the 14th Probabilistic Safety Assessment & Management Conference. He also gave talks on Disaster, Law and Social Justice in Japan at the New Perspectives in Japanese Law conference at Harvard Law School and at the East Asia Seminar Series in Cambridge.

  • In Cambridge, Catherine Rhodes and Sam Weiss Evans presented on the responsible governance of synthetic biology. Dr Rhodes and Lalitha Sundaram are coordinators of the OpenIP Models of Emerging Technologies seminar series.

  • Haydn Belfield and Shahar Avin met collaborators and donors in San Francisco in June, and led workshops at the Effective Altruism Global conference.

  • Dr Avin presented at the first AI Humanities conference in Seoul, at the Deep Learning in Finance Summit, at the Big Data & Society conference at London Metropolitan University, at an HSBC risk training event at the Judge Business School, and to Cambridge Computer Science Masters students. He also attended the Origins workshop on Artificial Intelligence and Autonomous Weapons Systems at Arizona State University, attended by former Secretary of Defense William J. Perry.

  • We continued our support for the student-run Engineering Safe AI reading group. The intention is to expose masters and PhD students to interesting AI safety research, so they consider careers in that area.

4. Public Engagement

We’re able to reach far more people with our research:

  • Since our new site launched in August 2017, we’ve had 53,726 visitors.

  • 6,394 newsletter subscribers, up from 4,863 in October 2016.

  • Facebook followers have tripled since December 2016, from 627 to 2,049.

  • Twitter followers have sextupled since December 2016, from 778 to 5,184.

  • Simon Beard appeared on BBC Radio with an essay on existential risk and Douglas Adams: What Do You Do If You Are a Manically Depressed Robot?

  • Adrian Currie appeared on the Naked Scientists radio show, on the episode Planet B: Should we leave Earth?

  • Adrian also held a book launch for Rock, Bone and Ruin: An Optimist’s Guide to the Historical Sciences at the Whipple Museum of the History of Science in May.

  • Shahar Avin gave a podcast interview to Calcalist, Israel’s most popular economic daily newspaper.

  • Catherine Rhodes gave a ‘Minerva Talk’ on Science, Society and the End of the World, at St James Senior Girls School, London.

  • Vision publisher David Hulme spoke to Seán Ó hÉigeartaigh about AI and existential risk, and released an article and hour-long video interview.

5. Recruitment and research team

We have just appointed a new Research Project Administrator, Clare Arnstein, who will start in early December; she is currently Executive Assistant to the Vice-Chancellor (on secondment from the School of Arts and Humanities). We have also just recruited an additional Senior Research Associate as an Academic Programme Manager.

New Postdoctoral Research Associates:

  • Dr Luke Kemp will work on the horizon-scanning and foresight strand of the Managing Extreme Technological Risks project. Luke has a background in international relations, particularly in relation to climate change policy and negotiations, and has recently been working as an economics consultant. He is interested in applying systems approaches to the forecasting of extreme technological risks, and in matching forecasts with mitigation and prevention strategies.

  • Dr Lauren Holt will work on biological risks, in particular providing support for Lalitha Sundaram on the new Schmidt Sciences project on Extreme Risks from Chronic Disease Threats. Lauren has a background in zoology and applied ecology, and joins us from the Environment and Sustainability Institute at the University of Exeter. She has also been involved with science communication and public engagement projects, and is planning to develop a career in science policy.

  • Asaf Tzachor is expected to join us for about a year. He will work on a project on food security, vulnerabilities in the global food system, and global catastrophic risk scenarios. He recently finished his doctorate at UCL’s Department of Science, Technology, Engineering and Public Policy as a Goldman Scholar. He is a Fellow of the Royal Geographical Society (RGS), and was Head of Strategy and Sustainability at the Ministry of Environment (Israel). He has written and edited a dozen national reports, books, academic articles, and government resolutions, and has taught at the Interdisciplinary Center Herzliya’s School of Sustainability and School of Government (Israel’s top-ranked private college).

Visiting researchers:

  • Dr Sam Weiss Evans, Assistant Professor in the Program on Science, Technology and Society at Tufts University, is visiting CSER from September 2018 to July 2019, and will be completing a book manuscript on the governance of security concerns in science and technology. Sam is also working to build a collaboration between CSER, MIT and the US National Academies on innovative approaches to governing dual use research.

New CSER Research Affiliates:

  • Dr Adrian Currie left CSER in September for a lectureship in Philosophy at the University of Exeter, but continues to collaborate with CSER and CFI on science and creativity.

  • Daikichi Seki is a JSPS-funded PhD student at the Graduate School of Advanced Integrated Studies in Human Survivability (GSAIS), Kyoto University. He is planning a visit to DAMTP next year to work on solar aspects of space weather, and will also spend some time with CSER to reflect on social aspects of the issue. This will be an initial phase in collaboration between GSAIS and CSER, with a plan to make a joint application to the Nippon/Sasakawa Foundation.

  • Yasmine Rix has been actively engaged with CSER’s work over the past few years, and will be curating an exhibition, ‘Ground Zero Earth’, in the Alison Richard Building in February and March 2019, which will connect themes of CSER’s research to the work of several emerging artists. She has secured in-kind support from CRASSH, and funding from the Cambridge Business Improvement District. She will help us run a public panel at the launch of the exhibition and will also be doing some school engagement work.

  • Zoe Cremer is a visiting student from ETH Zurich based at CFI for the 2018/2019 academic year, working with Seán Ó hÉigeartaigh and Marta Halina on models of progress in artificial intelligence. Her work intersects with a number of CSER topics, and she will be a regular participant in CSER research meetings.

  • Dr Tatsuya Amano will be leaving in January to become a prestigious Australian Research Council Future Fellow at the University of Queensland. When he does so, we intend to propose him as a Research Affiliate.

6. Expert Workshops and Public Lectures

Our events over the last few months have included:

  • July: Decision Theory & the Future of Artificial Intelligence Workshop (led by Huw Price and Yang Liu). Held in Munich, it was the second in a workshop series that brings together philosophers, decision theorists, and AI researchers in order to promote decision theory research that could help make AI safer. It consolidated our partnership with the Munich Center for Mathematical Philosophy, a leader in this area.

  • September: Workshops with the Singaporean Government. CSER, CFI and the Centre for Strategic Futures (part of the Singaporean Prime Minister’s Office) co-organised a series of workshops in Singapore that explored existential risk, foresight, and AI. It helped consolidate our relationship with the Singaporean Government, an influential and far-sighted global player.

  • September: Plutonium, Silicon and Carbon Workshop (led by Shahar Avin). It explored cybersecurity risks to nuclear weapons systems in the context of advances in AI and machine learning. It might lead to a paper with key experts from nuclear security, AI and cybersecurity. It also furthered collaboration with the United Nations Institute for Disarmament Research (UNIDIR); CSER researchers visited UNIDIR in Geneva in November.

    • This was followed by a Public Lecture by Dr Wade Huntley on ‘North Korea’s Nuclear Policy’. Dr Huntley teaches at the US Naval Postgraduate School and has published work on US strategic policies, East and South Asian regional security, and international relations theory.

  • October: Epistemic Security Workshop (led by Shahar Avin). This began a series of workshops co-organised with the UK’s Alan Turing Institute, looking at the changing threat landscape of information campaigns and propaganda, given current and expected advances in machine learning.

  • October: Generality and Intelligence: from Biology to AI Workshop (led by Seán Ó hÉigeartaigh). It explored how to evaluate progress in artificial intelligence in the context of different definitions of generality. It began the Cambridge² workshop series that will take place in Cambridge, UK, and Cambridge, MA, over the following two years, co-organised by the MIT-IBM Watson AI Lab and the Leverhulme Centre for the Future of Intelligence. MIT is a major player in AI research and development, recently launching a new $1bn school for AI.

  • October: Public Lecture by Dr Eli Fenichel on ‘Developments in the measurement of natural capital to advance sustainability assessment’. Dr Fenichel is an Associate Professor at Yale University. This lecture was co-organised with the Cambridge Conservation Initiative.

7. Upcoming activities

Four books will be published in early 2019:

  • Extremes (Cambridge University Press), edited by Julius Weitzdörfer and Duncan Needham, draws on the 2017 Darwin College Lecture Series Julius co-organised. It features contributions from Emily Shuckburgh, Nassim Nicholas Taleb, David Runciman, and others.

  • Biological Extinction: New Perspectives (Cambridge University Press) is edited by Partha Dasgupta, and draws upon the 2017 workshop with the Vatican’s Pontifical Academy of Sciences he co-organised.

  • Fukushima and the Law (Cambridge University Press) is edited by Julius Weitzdörfer and Kristian Lauta, and draws upon a 2016 workshop, FUKUSHIMA – Five Years On, which Julius co-organised.

  • Time and the Generations: Population Ethics for a Diminishing Planet (New York: Columbia University Press), by Partha Dasgupta. This is based on Prof Dasgupta’s Kenneth Arrow Lectures delivered at Columbia University.

Upcoming events:

  • November, January: Epistemic Security Workshops (led by Shahar Avin). The next in the series of workshops co-organised with the Alan Turing Institute, looking at the changing threat landscape of information campaigns and propaganda, given current and expected advances in machine learning.

  • January: We are co-organising the SafeAI 2019 Workshop, the Association for the Advancement of Artificial Intelligence’s (AAAI) Workshop on AI Safety.

  • February/March: Ground Zero Earth Art Exhibition. We are collaborating with Yasmine Rix on this art exhibition at the Alison Richard Building, to engage academics and the public in our research. The launch event will be on the evening of 14 February.

Timing to be confirmed:

  • Spring: Cost-benefit Analysis of Technological Risk Workshop (led by Simon Beard).

  • Spring: Generality and Intelligence: from Biology to AI. The next in the Cambridge² workshop series, co-organised by the MIT-IBM Watson AI Lab and the Leverhulme Centre for the Future of Intelligence. MIT is a major player in AI, recently launching a new $1bn school for AI.

  • Summer: Culture of Science – Security and Dual Use Workshop (led by Sam Weiss Evans).

  • Summer: Biological Extinction symposium, around the publication of Sir Partha’s book.

  • Summer: Decision Theory & the Future of Artificial Intelligence Workshop (led by Huw Price and Yang Liu). The third workshop in a series bringing together philosophers, decision theorists, and AI researchers in order to promote research at the nexus between decision theory and AI. Co-organised with the Munich Center for Mathematical Philosophy.

  • Autumn: Horizon-Scanning Workshop (led by Luke Kemp).

  • Public lectures: we will continue to hold at least six public lectures each year. Most of these will link to one of our workshops.

8. Publications

Adrian Currie (ed.) (2018) Special Issue: Futures of Research in Catastrophic and Existential Risk. Futures.

Many of the fifteen papers in the Special Issue were originally presented at our first Cambridge Conference on Catastrophic Risk in 2016, and it includes three papers by CSER researchers:

“We present a novel classification framework for severe global catastrophic risk scenarios. Extending beyond existing work that identifies individual risk scenarios, we propose analysing global catastrophic risks along three dimensions: the critical systems affected, global spread mechanisms, and prevention and mitigation failures. The classification highlights areas of convergence between risk scenarios, which supports prioritisation of particular research and of policy interventions. It also points to potential knowledge gaps regarding catastrophic risks, and provides an interdisciplinary structure for mapping and tracking the multitude of factors that could contribute to global catastrophic risks.”
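
A purely illustrative sketch of how a scenario might be recorded along the framework’s three dimensions (the example scenario and all field values below are hypothetical, not taken from the paper):

```python
# Illustrative sketch only: the scenario and its field values are hypothetical,
# loosely following the three dimensions named in the abstract above.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    critical_systems: list[str]       # which vital systems are disrupted
    spread_mechanisms: list[str]      # how the damage propagates globally
    mitigation_failures: list[str]    # why prevention and response fall short

pandemic = RiskScenario(
    name="engineered pandemic (hypothetical example)",
    critical_systems=["human health", "food supply chains"],
    spread_mechanisms=["air travel", "trade networks"],
    mitigation_failures=["late detection", "weak international coordination"],
)

# Scenarios that share entries along a dimension point to convergent research
# and policy interventions, which is what the classification is meant to surface.
```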

“There has been much discussion of the moral, legal and prudential implications of geoengineering, and of governance structures for both the research and deployment of such technologies. However, insufficient attention has been paid to how such measures might affect geoengineering in terms of the incentive structures which underwrite scientific progress. There is a tension between the features that make science productive, and the need to govern geoengineering research, which has thus far gone underappreciated. I emphasize how geoengineering research requires governance which reaches beyond science’s traditional boundaries, and moreover requires knowledge which itself reaches beyond what we traditionally expect scientists to know about. How we govern emerging technologies should be sensitive to the incentive structures which drive science.”

The rest of the papers are:

Scientific communities and existential risk

“Scientific freedoms are exercised within the context of certain responsibilities, which in some cases justify constraints on those freedoms. (Constraints that may be internally established within scientific communities and/or externally enacted.) Biosecurity dimensions of work involving pathogens are one such case and raise complex challenges for science and policy. The central issues and debates are illustrated well in the development of responses to publication of (‘gain of function’) research involving highly pathogenic avian influenza, by a number of actors, including scientists, journal editors, scientific academies, and national and international policy groups.”

“The special issue Creativity, Conservatism & the Social Epistemology of Science collects six papers which, in different ways, tackle ‘promotion questions’ concerning scientific communities: which features shape those communities, and which might be changed to promote the kinds of epistemic features we desire. In this introduction, I connect these discussions with more traditional debate in the philosophy of science and reflect upon the notions of creativity which underwrite the papers.”

“Existential risks, particularly those arising from emerging technologies, are a complex, obstinate challenge for scientific study. This should motivate studying how the relevant scientific communities might be made more amenable to studying such risks. I offer an account of scientific creativity suitable for thinking about scientific communities, and provide reasons for thinking contemporary science doesn’t incentivise creativity in this specified sense. I’ll argue that a successful science of existential risk will be creative in my sense. So, if we want to make progress on those questions we should consider how to shift scientific incentives to encourage creativity. The analysis also has lessons for philosophical approaches to understanding the social structure of science. I introduce the notion of a ‘well-adapted’ science: one in which the incentive structure is tailored to the epistemic situation at hand.”

Government reactions to disasters

“In East Asia, disasters have been regarded as events which uncover the mistakes of the past as much as they provide opportunities for building a more just society. In Japan, this phenomenon was captured through the concept of “world rectification” (yonaoshi) in the past and continues to lead to the improvement of disaster preparedness to this day. In the same way, disasters in historical China were not only interpreted as expressions of heavenly wrath for a ruler’s mistakes, but also as an opportunity for better governance. Taking into account the way in which disasters simultaneously mirror existing trajectories and open up space for new ones, this chapter compares the protection of disaster victims in China and Japan by looking at two recent catastrophes, the 2008 earthquake in Wenchuan and the earthquake, tsunami and nuclear meltdown of 11 March 2011 in eastern Japan. We pay particular attention to the framing of both disasters as either man-made or natural, which carries significant social and political implications. Both governments made use of this distinction to shrug off responsibility and to influence mobilisation processes among the victims. The distinction between man-made and natural disasters also had a significant influence on the resulting institutionalisation processes.”

Environmental assessment of high-yield farming

  • Andrew Balmford, Tatsuya Amano, Harriet Bartlett, Dave Chadwick, Adrian Collins, David Edwards, Rob Field, Philip Garnsworthy, Rhys Green, Pete Smith, Helen Waters, Andrew Whitmore, Donald M. Broom, Julian Chara, Tom Finch, Emma Garnett, Alfred Gathorne-Hardy, Juan Hernandez-Medrano, Mario Herrero, Fangyuan Hua, Agnieszka Latawiec, Tom Misselbrook, Ben Phalan, Benno I. Simmons, Taro Takahashi, James Vause, Erasmus zu Ermgassen, Rowan Eisner. (2018). The environmental costs and benefits of high-yield farming. Nature Sustainability.

“How we manage farming and food systems to meet rising demand is pivotal to the future of biodiversity. Extensive field data suggest that impacts on wild populations would be greatly reduced through boosting yields on existing farmland so as to spare remaining natural habitats. High-yield farming raises other concerns because expressed per unit area it can generate high levels of externalities such as greenhouse gas emissions and nutrient losses. However, such metrics underestimate the overall impacts of lower-yield systems. Here we develop a framework that instead compares externality and land costs per unit production. We apply this framework to diverse data sets that describe the externalities of four major farm sectors and reveal that, rather than involving trade-offs, the externality and land costs of alternative production systems can covary positively: per unit production, land-efficient systems often produce lower externalities. For greenhouse gas emissions, these associations become more strongly positive once forgone sequestration is included. Our conclusions are limited: remarkably few studies report externalities alongside yields; many important externalities and farming systems are inadequately measured; and realizing the environmental benefits of high-yield systems typically requires additional measures to limit farmland expansion. Nevertheless, our results suggest that trade-offs among key cost metrics are not as ubiquitous as sometimes perceived.”
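
A minimal sketch of the framework’s central move, comparing externality and land costs per unit production rather than per unit area, using hypothetical figures rather than data from the paper:

```python
# Illustrative only: hypothetical numbers, not data from Balmford et al. (2018).
systems = {
    # name: (yield in tonnes/ha, GHG emissions in tCO2e/ha)
    "lower-yield": (4.0, 2.0),
    "high-yield": (8.0, 3.0),  # emits more per hectare than the lower-yield system
}

for name, (yield_t_per_ha, ghg_t_per_ha) in systems.items():
    land_cost = 1.0 / yield_t_per_ha          # hectares needed per tonne produced
    ghg_cost = ghg_t_per_ha / yield_t_per_ha  # tCO2e per tonne produced
    print(f"{name}: {land_cost:.3f} ha/t, {ghg_cost:.3f} tCO2e/t")

# With these made-up numbers the high-yield system uses less land *and* emits less
# per tonne produced, i.e. land and externality costs covary positively.
```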

Issues in decision theory relevant to advanced artificial intelligence

“Can an agent deliberating about an action A hold a meaningful credence that she will do A? ‘No’, say some authors, for ‘deliberation crowds out prediction’ (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce’s rejection of DCOP rests on terminological choices about terms such as ‘intention’, ‘prediction’, and ‘belief’. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce’s Evidential Autonomy Thesis is effectively DCOP, in different terminological clothing. Both principles rest on the so-called ‘transparency’ of first-person present-tensed reflection on one’s own mental states.”

Theoretical mapping of artificial intelligence

“We present nine facets for the analysis of the past and future evolution of AI. Each facet has also a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using the information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AI topics, an official database from the AAAI, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI systems in the intelligence landscape and the discipline as a whole. This analytical framework provides a more structured and systematic way of looking at the shape and boundaries of AI.”

“We analyze and reframe AI progress. In addition to the prevailing metrics of performance, we highlight the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress.”
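
A minimal sketch of the Pareto-surface idea described above, using hypothetical systems and dimensions rather than the paper’s actual case studies:

```python
# Illustrative only: hypothetical systems and axes, not results from the paper.
# Each system is scored so that higher is better on every axis
# (performance as-is, costs negated so that a lower cost is a higher score).
existing = {
    "system_A": (0.90, -100.0, -5.0),  # (performance, -compute cost, -data cost)
    "system_B": (0.85, -20.0, -1.0),
}

def dominates(x, y):
    """True if x is at least as good as y on every axis and strictly better on one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def expands_pareto_surface(candidate, others):
    """A candidate expands the Pareto surface if no existing system dominates it."""
    return not any(dominates(o, candidate) for o in others)

new_system = (0.88, -30.0, -0.5)  # slightly lower performance, but much cheaper
print(expands_pareto_surface(new_system, existing.values()))  # True: a generic advance
```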

  • Sankalp Bhatnagar, Anna Alexandrova, Shahar Avin, Stephen Cave, Lucy Cheke, Matthew Crosby, Jan Feyereisl, Marta Halina, Bao Sheng Loe, Seán Ó hÉigeartaigh, Fernando Martínez-Plumed, Huw Price, Henry Shevlin, Adrian Weller, Alan Winfield, José Hernández-Orallo. (2018). Mapping Intelligence: Requirements and Possibilities. In: Müller V. (ed.) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham.

“New types of artificial intelligence (AI), from cognitive assistants to social robots, are challenging meaningful comparison with other kinds of intelligence. How can such intelligent systems be catalogued, evaluated, and contrasted, with representations and projections that offer meaningful insights? To catalyse the research in AI and the future of cognition, we present the motivation, requirements and possibilities for an atlas of intelligence: an integrated framework and collaborative open repository for collecting and exhibiting information of all kinds of intelligence, including humans, non-human animals, AI systems, hybrids and collectives thereof. After presenting this initiative, we review related efforts and present the requirements of such a framework. We survey existing visualisations and representations, and discuss which criteria of inclusion should be used to configure an atlas of intelligence.”