Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019

We have just prepared a Six Month Report for our Management Board. This is a public version of that Report. We send short monthly updates in our newsletter – subscribe here.


Contents:

  1. Overview

  2. Policy Engagement

  3. Academic and Industry Engagement

  4. Public Engagement

  5. Recruitment and research team

  6. Expert Workshops and Public Events

  7. Upcoming activities

  8. Publications

1. Overview

The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to civilizational collapse or human extinction. We study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists and policy-makers working to tackle these risks. Our research focuses on Global Catastrophic Biological Risks, Extreme Risks and the Global Environment, Risks from Artificial Intelligence, and Managing Extreme Technological Risks.

Our last Management Board Report was in October 2018. Over the last five months, we have continued to advance existential risk research and grow the field. Highlights include:

  • Publication of the Extremes book, seven papers in venues such as Nature Machine Intelligence, and a Special Issue.

  • Engagement with global policymakers and industry leaders at conferences and in one-on-one meetings.

  • Announcement that Prof. Dasgupta will lead the UK Government's Global Review of the Economics of Biodiversity.

  • Submission of advice to key US, UN and EU advisory bodies.

  • Hosting of several expert workshops, helping us, among other things, to encourage leading machine learning researchers to produce over 20 AI safety papers.

  • Welcoming of new research staff and visitors.

  • Production of a report on business school rankings, which contributed to the two leading business school rankers reviewing their methodologies.

  • Public engagement through media coverage and the exhibition ‘Ground Zero Earth’.

2. Policy Engagement:

We have had the opportunity to speak directly with policymakers and institutions across the world who are grappling with the difficult and novel challenge of how to unlock the socially beneficial aspects of new technologies while mitigating their risks. Through advice and discussions, we have the opportunity to reframe the policy debate and, we hope, to shape the trajectory of these technologies themselves.

  • Prof. Sir Partha Dasgupta, the Chair of CSER’s Management Board, will lead the UK Government’s comprehensive global review of the link between biodiversity and economic growth. The aim is to “explore ways to enhance the natural environment and deliver prosperity”. The announcement was made by the Chancellor of the Exchequer in the Spring Statement.

  • Submitted advice to the UN High-level Panel on Digital Cooperation (Luke Kemp, Haydn Belfield, Seán Ó hÉigeartaigh, Zoe Cremer). CSER and FHI researchers laid out the challenges posed by AI and offered some options for the global, international governance of AI. The Secretary-General established the Panel, which Melinda Gates and Jack Ma co-chair. The Panel chose this advice as one of five from over 150 submissions to be highlighted at a ‘virtual town hall’. The advice may influence global policy-makers and help set the agenda. Read Advice.

  • Submitted advice to the EU High-Level Expert Group on Artificial Intelligence. Haydn Belfield and Shahar Avin responded to the Draft Ethics Guidelines for Trustworthy AI, drawing attention to the recommendations in our report The Malicious Use of Artificial Intelligence. This helped influence the EU’s Ethics Guidelines, affecting behaviour across Europe. Read Advice.

  • The All-Party Parliamentary Group for Future Generations, set up by Cambridge students mentored by CSER researchers, held an event on Global Pandemics: Is the UK Prepared? in Parliament in November 2018, continuing our engagement with UK parliamentarians on existential risk topics. Speakers: Dr Catherine Rhodes (CSER), Dr Piers Millett (FHI), Professor David Heymann CBE (London School of Hygiene and Tropical Medicine). Report here. The APPG has also recently hired two Coordinators, Sam Hilton and Caroline Baylon.

  • Submitted advice to the US Government’s Bureau of Industry and Security on “Review of Controls on Certain Emerging Technologies” (Sam Weiss Evans). The Bureau is the part of the US government that administers the US export control regime. Read Advice.

  • CSER researchers advised the Centre for Data Ethics and Innovation (the UK’s national AI advisory body). This kind of engagement is crucial to ensuring research papers actually have an impact, and do not just gather dust on the shelf.

  • Seán Ó hÉigeartaigh was one of 50 experts exclusively invited to participate in the second Global AI Governance Forum at the World Government Summit in Dubai. The Summit is dedicated to shaping the future of governments worldwide.

  • CSER researchers attended invite-only events on Modern Deterrence (Ditchley Park) and High impact bio-threats (Wilton Park).

  • At the United Nations, CSER researchers attended the negotiations on Lethal Autonomous Weapons Systems (LAWS) and the Biological Weapons Convention annual meeting of states parties. They also engaged with the United Nations Institute for Disarmament Research (UNIDIR).

  • CSER researchers continued meetings with top UK civil servants as part of the policy fellows programme organised by the Centre for Science and Policy (CSaP).

3. Academic and Industry Engagement:

As an interdisciplinary research centre within the University of Cambridge, we seek to grow the academic field of existential risk research, so that it receives the rigorous and detailed attention it deserves. Our researchers also continued their extensive and deep collaboration with industry. Extending our links improves our research by exposing us to the cutting edge of industrial R&D, and helps to nudge powerful companies towards more responsible practices.

  • Several researchers participated in the Beneficial AI Puerto Rico Conference, engaging with industry and academic leaders, and shaping the agenda of the AI risk community for the next two years. Seán Ó hÉigeartaigh and Shahar Avin gave keynotes. This was the third conference organised by the Future of Life Institute. The first, in 2015, produced a research agenda for safe and beneficial AI, endorsed by thousands of researchers. The second, in 2017, produced the Asilomar AI Principles.

  • Visiting researchers: Dr Kai Spiekermann from LSE visited January-March to work on a paper on ‘irreversible losses’; Prof Hiski Haukkala, former foreign policy Adviser to the Finnish President; and Dr Simona Chiodo and Dr Daniele Chiffi of the €9m Territorial Fragility project at the Politecnico di Milano.

  • Seán Ó hÉigeartaigh attended the Partnership on AI meeting and contributed to the creation of several AI/AGI safety- and strategy-relevant project proposals with the Safety-Critical AI working group.

  • Several CSER researchers contributed to the mammoth Ethically Aligned Design, First Edition: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. It was produced by IEEE, the world’s largest technical professional organization. The release culminates a three-year, global iterative process involving thousands of experts.

  • Luke Kemp and Shahar Avin participated in a high-level AI and political security workshop led by Prof Toni Erskine at the Coral Bell School of Asia Pacific Affairs at the Australian National University.

  • Shahar Avin continued running ‘scenario exercises’ exploring different possible AI scenarios. He has run over a dozen so far, with some participants from leading AI labs. He aims to explore the realm of possibilities and to educate participants on some of the challenges ahead.

  • We continued our support for the student-run Engineering Safe AI reading group. The group exposes masters and PhD students to interesting AI safety research, encouraging them to consider careers in that area.

  • Catherine Rhodes had several meetings with groups in Washington DC working on global catastrophic biological risks, governance of dual-use research in the life sciences, and extreme technological risk more broadly.

  • Lalitha Sundaram is working with South African groups to boost their capacity in low-cost viral diagnostics.

  • We will partner with the Journal of Science Policy & Governance to produce a Special Issue on governance for dual-use technologies. This Special Issue will encourage students to engage with existential risk research and help us identify future talent.

4. Public Engagement:

We are able to reach far more people with our research online:

  • 14,000 website visitors over the last 90 days.

  • 6,602 newsletter subscribers, up from 4,863 in Oct 2016.

  • 6,343 Twitter followers.

  • 2,208 Facebook followers.

5. Recruitment and research team

New Postdoctoral Research Associates:

  • Dr Ellen Quigley is working on how to address climate change and biodiversity risks through the investment policies and practices of institutional investors. She was previously a CSER Research Affiliate. She also collaborates with the Centre for Endowment Asset Management at the Judge Business School, which jointly funds her work. She recently published the Business School Rankings for the 21st Century report at events in Davos and Shanghai. Four days later, the Financial Times announced a “complete review of their methodology”, supported by a letter in the FT signed by two dozen business leaders.

  • Dr Jess Whittlestone is working on a research project combining foresight and policy/ethics for AI, in collaboration with the Centre for the Future of Intelligence (CFI), where she is a postdoctoral researcher. She is the lead author on a major new report (and paper), Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research. It surveys the dozen sets of AI principles proposed over the last two years, and suggests that the next step for the field of AI ethics is to explore the tensions that arise as we try to implement principles in practice.

Visiting researchers:

  • Lord Des Browne, UK Secretary of State for Defence (2006-2008) and Vice Chair of the Nuclear Threat Initiative. Lord Browne is involved with the new Biosecurity Research Initiative at St Catharine’s College (BioRISC), and will be based at CSER for around a day a week.

  • Phil Torres, visiting March-June. Author of Morality, Foresight, and Human Flourishing (2017) and The End: What Science and Religion Tell Us about the Apocalypse (2016). He will work on co-authored papers with Simon Beard and on a new book.

  • Dr Olaf Corry, visiting March-September. Associate Professor in the Department of Political Science, Copenhagen University. With a background in the international politics of climate change, he will be researching solar geoengineering politics.

  • Rumtin Sepasspour, visiting Spring/Summer (four months). Foreign Policy Adviser in the Australian Prime Minister’s Office, he will focus on enhancing CSER researchers’ capability to develop policy ideas.

  • Dr Eva Vivalt, visiting June. Assistant Professor (Economics) at the Australian National University, PI on Y Combinator Research’s basic income study, and founder of AidGrade, a research institute that generates and synthesizes evidence in international development.

6. Expert Workshops and Public Events:

  • November, January: Epistemic Security Workshops (led by Dr Avin). Part of a series of workshops co-organised with the UK’s Alan Turing Institute, looking at the changing threat landscape of information campaigns and propaganda, given current and expected advances in machine learning.

  • January: SafeAI 2019 Workshop (led by Dr Ó hÉigeartaigh and colleagues) at the Association for the Advancement of Artificial Intelligence’s (AAAI) Conference. AAAI is one of the four most important AI conferences globally. These regular workshops embed safety in the wider field, and provide a publication venue. The workshop featured over 20 cutting-edge papers in AI safety, and encouraged leading AI researchers to publish on AI safety.

  • February-March: Ground Zero Earth Exhibition. Curated by CSER Research Affiliate Yasmine Rix and held in collaboration with CRASSH. Five artists explored existential risk. The exhibition was held at the Alison Richard Building, home to the Politics and International Studies Department, and engaged academics and the public in our research. The launch event was featured on BBC Radio. It closed with a ‘Rise of the Machines’ short film screening. Read overview.

  • March: Extremes Book Launch. The book, edited by Julius Weitzdörfer and Duncan Needham, draws on the 2017 Darwin College Lecture Series Julius co-organised. It features contributions from Emily Shuckburgh, Nassim Nicholas Taleb, David Runciman, and others. Read more.

  • 28-31 March: Augmented Intelligence Summit. The Summit brought together a multi-disciplinary group of policy, research, and business leaders to imagine and interact with a simulated model of a positive future for our global society, economy, and politics – through the lens of advanced AI. Dr Avin was on the Steering Committee, delivered a keynote, and ran a scenario simulation. More.

  • 3-5 April: EiM 2: The second meeting on Ethics in Mathematics. Dr Maurice Chiodo and Dr Piers Bursill-Hall from the Faculty of Mathematics in Cambridge have been spearheading an effort to teach responsible behaviour and ethical awareness to mathematicians. CSER supported the workshop. More.

  • 5-6 April: Tools for building trust in AI development (co-led by Shahar Avin). This two-day workshop convened some of the world’s top experts in AI, security, and policy to survey existing mechanisms for trust-building in AI and to develop a research agenda for designing new ones.

7. Upcoming activities

Three more books will be published this year:

  • Fukushima and the Law is edited by Julius Weitzdörfer and Kristian Lauta, and draws upon a 2016 workshop, Fukushima – Five Years On, which Julius co-organised.

  • Biological Extinction is edited by Partha Dasgupta, and draws upon the 2017 workshop with the Vatican’s Pontifical Academy of Sciences, which he co-organised.

  • Time and the Generations: Population Ethics for a Diminishing Planet (New York: Columbia University Press), by Partha Dasgupta, based on his Kenneth Arrow Lectures delivered at Columbia University.

Upcoming events:

  • 21 May: Local Government Climate Futures (led by Simon Beard with Anne Miller).

  • 6-7 June: Evaluating Extreme Technological Risks workshop (led by Simon Beard).

  • 26 June: The Centre for Science and Policy (CSaP) Conference. CSER is partnering on a panel at the conference focusing on methods and techniques for forecasting extreme risks.

  • 26-27 August: Decision Theory & the Future of Artificial Intelligence Workshop (led by Huw Price and Yang Liu). The third workshop in a series bringing together philosophers, decision theorists, and AI researchers to promote research at the nexus of decision theory and AI. Co-organised with the Munich Center for Mathematical Philosophy.

Timing to be confirmed:

  • Summer: The next in the Cambridge² workshop series, co-organised by the MIT-IBM Watson AI Lab and CFI.

  • Summer: Culture of Science—Security and Dual Use Workshop (led by Dr Evans).

  • Summer/Autumn: Biological Extinction symposium, around the publication of Sir Partha’s book.

  • Autumn: Horizon-Scanning workshop (led by Dr Kemp).

  • April 2020: CSER’s next international conference: the 2020 Cambridge Conference on Catastrophic Risk.

8. Publications

  • Needham, D. and Weitzdörfer, J. (Eds.) (2019). Extremes. Cambridge University Press.

    • Humanity is confronted by and attracted to extremes. Extreme events shape our thinking, feeling, and actions; they echo in our politics, media, literature, and science. We often associate extremes with crises, disasters, and risks to be averted, yet extremes also have the potential to lead us towards new horizons. Featuring essays by leading intellectuals and public figures (like Emily Shuckburgh, Nassim Nicholas Taleb and David Runciman) arising from the 2017 Darwin College Lectures, this volume explores ‘extreme’ events.

  • Cave, S. and Ó hÉigeartaigh, S. (2019). Bridging near- and long-term concerns about AI. Nature Machine Intelligence 1:5.

    • We were invited to contribute a paper to the first issue of the new Nature journal, Nature Machine Intelligence.

    • “Debate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention.”

  • Häggström, O. and Rhodes, C. (2019). Special Issue: Existential risk to humanity. Foresight.

    • Häggström, O. and Rhodes, C. (2019). Guest Editorial. Foresight.

    • “We are not yet at a stage where the study of existential risk is established as an academic discipline in its own right. Attempts to move in that direction are warranted by the importance of such research (considering the magnitude of what is at stake). One such attempt took place in Gothenburg, Sweden, during the fall of 2017: an international guest researcher program on existential risk at Chalmers University of Technology and the University of Gothenburg, featuring daily seminars and other research activities over the course of two months, with Anders Sandberg serving as scientific leader of the program and Olle Häggström as chief local organizer, and with participants from a broad range of academic disciplines. The nature of this program brought substantial benefits in community building and in building momentum for further work in the field: of which the contributions here are one reflection. The present special issue of Foresight is devoted to research carried out and/or discussed in detail at that program. All in all, the issue collects ten papers that have made it through the peer review process.”

  • Beard, S. (2019). What Is Unfair about Unequal Brute Luck? An Intergenerational Puzzle. Philosophia.

    • “According to Luck egalitarians, fairness requires us to bring it about that nobody is worse off than others where this results from brute bad luck, but not where they choose or deserve to be so. In this paper, I consider one type of brute bad luck that appears paradigmatic of what a Luck Egalitarian ought to be most concerned about, namely that suffered by people who are born to badly off parents and are less well off as a result. However, when we consider what is supposedly unfair about this kind of unequal brute luck, luck egalitarians face a dilemma. According to the standard account of luck egalitarianism, differential brute luck is unfair because of its effects on the distribution of goods. Yet, where some parents are worse off because they have chosen to be imprudent, it may be impossible to neutralize these effects without creating a distribution that seems at least as unfair. This, I argue, is problematic for luck egalitarianism. I, therefore, explore two alternative views that can avoid this problem. On the first of these, proposed by Shlomi Segall, the distributional effects of unequal brute luck are unfair only when they make a situation more unequal, but not when they make it more equal. On the second, it is the unequal brute luck itself, rather than its distributional effects, that is unfair. I conclude with some considerations in favour of this second view, while accepting that both are valid responses to the problem I describe.”

  • Beard, S. (2019). Perfectionism and the Repugnant Conclusion. The Journal of Value Inquiry.

    • “The Repugnant Conclusion and its paradoxes pose a significant problem for outcome evaluation. Derek Parfit has suggested that we may be able to resolve this problem by accepting a view he calls ‘Perfectionism’, which gives lexically superior value to ‘the best things in life’. In this paper, I explore perfectionism and its potential to solve this problem. I argue that perfectionism provides neither a sufficient means of avoiding the Repugnant Conclusion nor a full explanation of its repugnance. This is because even lives that are ‘barely worth living’ may contain the best things in life if they also contain sufficient ‘bad things’, such as suffering or frustration. Therefore, perfectionism can only fully explain or avoid the Repugnant Conclusion if combined with other claims, such as that bad things have an asymmetrical value relative to many good things. This combined view faces the objection that any such asymmetry implies Parfit’s ‘Ridiculous Conclusion’. However, I argue that perfectionism itself faces very similar objections, and that these are question-begging against both views. Finally, I show how the combined view that I propose not only explains and avoids the Repugnant Conclusion but also allows us to escape many of its paradoxes as well.”

  • Avin, S. (2018). Mavericks and lotteries. Studies in History and Philosophy of Science Part A.

    • “In 2013 the Health Research Council of New Zealand began a stream of funding entitled ‘Explorer Grants’, and in 2017 changes were introduced to the funding mechanisms of the Volkswagen Foundation ‘Experiment!’ and the New Zealand Science for Technological Innovation challenge ‘Seed Projects’. All three funding streams aim at encouraging novel scientific ideas, and all now employ random selection by lottery as part of the grant selection process. The idea of funding science by lottery emerged independently in several corners of academia, including in philosophy of science. This paper reviews the conceptual and institutional landscape in which this policy proposal emerged, how different academic fields presented and supported arguments for the proposal, and how these have been reflected (or not) in actual policy. The paper presents an analytical synthesis of the arguments presented to date, notes how they support each other and shape policy recommendations in various ways, and where competing arguments highlight the need for further analysis or more data. In addition, it provides lessons for how philosophers of science can engage in shaping science policy, and in particular, highlights the importance of mixing complementary expertise: it takes a (conceptually diverse) village to raise (good) policy.”

  • Avin, S. (2019). Exploring artificial intelligence futures. Journal of Artificial Intelligence Humanities, Vol. 2.

    • “Artificial intelligence technologies are receiving high levels of attention and ‘hype’, leading to a range of speculation about futures in which such technologies, and their successors, are commonly deployed. By looking at existing AI futures work, this paper surveys, and offers an initial categorisation of, several of the tools available for such futures-exploration, in particular those available to humanities scholars, and discusses some of the benefits and limitations of each. While no tools exist to reliably predict the future of artificial intelligence, several tools can help us expand our range of possible futures in order to reduce unexpected surprises, and to create common languages and models that enable constructive conversations about the kinds of futures we would like to occupy or avoid. The paper points at several tools as particularly promising and currently neglected, calling for more work in data-driven, realistic, integrative, and participatory scenario role-plays.”

  • Lewis, S.C., Perkins-Kirkpatrick, S.E., Althor, G., King, A.D., Kemp, L. (2019). Assessing contributions of major emitters’ Paris-era decisions to future temperature extremes. Geophysical Research Letters.

    • “Temperature extremes can damage aspects of human society, infrastructure, and our ecosystems. The frequency, severity, and duration of high temperatures are increasing in some regions and are projected to continue increasing with further global temperature increases as greenhouse gas emissions rise. While the international Paris Agreement aims to limit warming through emissions reduction pledges, none of the major emitters has made commitments that are aligned with limiting warming to 2 °C. In this analysis, we examine the impact of the world’s three largest greenhouse gas emitters’ (EU, USA, and China) current and future decisions about carbon dioxide emissions on the occurrence of future extreme temperatures. We show that future extremes depend on the emissions decisions made by the major emitters. By implementing stronger climate pledges, major emitters can reduce the frequency of future extremes and their own calculated contributions to these temperature extremes.”

  • Hernández-Orallo, J., Martínez-Plumed, F., Avin, S., and Ó hÉigeartaigh, S. (2019). Surveying Safety-relevant AI Characteristics. Proceedings of the AAAI Workshop on Artificial Intelligence Safety 2019.

    • Shortlisted for Best Paper Prize.

    • “The current analysis in the AI safety literature usually combines a risk or safety issue (e.g., interruptibility) with a particular paradigm for an AI agent (e.g., reinforcement learning). However, there is currently no survey of safety-relevant characteristics of AI systems that may reveal neglected areas of research or suggest to developers what design choices they could make to avoid or minimise certain safety concerns. In this paper, we take a first step towards delivering such a survey, from two angles. The first features AI system characteristics that are already known to be relevant to safety concerns, including internal system characteristics, characteristics relating to the effect of the external environment on the system, and characteristics relating to the effect of the system on the target environment. The second presents a brief survey of a broad range of AI system characteristics that could prove relevant to safety research, including types of interaction, computation, integration, anticipation, supervision, modification, motivation and achievement. This survey enables further work in exploring system characteristics and design choices that affect safety concerns.”

  • Report: Pitt-Watson, D. and Quigley, E. (2019). Business School Rankings for the 21st Century.

    • “This paper addresses the question of how business schools, and the courses they offer, are evaluated and ranked. The existing benchmarking systems, many of which are administered by well-respected media institutions, appear to have a strong motivational effect for administrators and prospective students alike. Many of the rankings criteria currently in use were developed years or decades ago, and use simple measures such as salary and salary progression. Less emphasis has been placed on what is taught and learned at the schools. This paper argues that, given the influence of the ranking publications, it is time for a review of the way they evaluate business education. What follows is meant to contribute to a fruitful ongoing discussion about the future of business schools in our current century.”