CSER and FHI advice to UN High-level Panel on Digital Cooperation


Researchers from Cambridge University's Centre for the Study of Existential Risk and Oxford University's Center for the Governance of AI at the Future of Humanity Institute submitted advice to the UN Secretary-General's High-level Panel on Digital Cooperation.

The High-level Panel on Digital Cooperation was established by the UN Secretary-General in July 2018 to identify good examples and propose modalities for working cooperatively across sectors, disciplines and borders to address challenges in the digital age. It is co-chaired by Melinda Gates and Jack Ma.

The full submission is below.


UN High-level Panel on Digital Cooperation: A Proposal for International AI Governance

Authors: Dr Luke Kemp¹, Peter Cihon², Matthijs Michiel Maas², Haydn Belfield¹, Dr Seán Ó hÉigeartaigh¹, Jade Leung² and Zoe Cremer¹. (¹ = CSER, ² = FHI)

Summary

International Digital Cooperation must be underpinned by the effective international governance of artificial intelligence (AI). AI systems pose numerous transboundary policy problems in both the short and the long term. The international governance of AI should be anchored to a regime under the UN which is inclusive (of multiple stakeholders), anticipatory (of fast-progressing AI technologies and impacts), responsive (to the rapidly evolving technology and its uses) and reflexive (critically reviewing and updating its policy principles). We propose some options for the international governance of AI which could help coordinate existing international law on AI, forecast future developments, risks and opportunities, and fill critical gaps in international governance.

1. Issues in Digital Cooperation

Digital cooperation will rise or fall by the use or misuse of rapidly developing artificial intelligence (AI) technologies. AI will transform international social, economic, and legal relations in ways that spill over far beyond the digital realm. Digital cooperation on AI is essential to help stakeholders build capacity for the ongoing digital transformation and to support a safe and inclusive digital future. Accordingly, this submission focuses on the international governance of AI systems.

AI technologies are dual-use. They present opportunities for advances in transport and medicine, for the transition to renewable energy, and for raising standards of living. Some systems may even be used to strengthen the monitoring and enforcement of international law and improve governance. Yet they also have the potential to create significant harms. These include labour displacement, unpredictable weapons systems, strengthened totalitarianism and destabilizing strategic shifts in the international order (Dafoe 2018; Payne 2018). The challenges of AI stem both from capabilities that already exist or will be reached in the near term (within five years), and from longer-term prospective capabilities. The two are intricately intertwined. How we address the near-term challenges of AI will shape longer-term policy and technology pathways (Cave and Ó hÉigeartaigh 2019). Yet the long-term disruptive impacts could dwarf other concerns. Both need to be governed in tandem.

Challenges from Existing and Near-Term Capabilities

  • Maintaining effective human oversight in the application of AI to military technology, decision support and infrastructure;

  • Algorithmic bias and justice;

  • Algorithmic transparency;

  • AI-aided cybercrime;

  • AI-aided cyberwarfare;

  • Safety and regulation of autonomous vehicles;

  • Privacy and surveillance; and

  • AI-enabled computational propaganda.

Challenges from Long-Term Capabilities

  • Widespread labour displacement could heighten wealth inequalities and fuel domestic and international political instability;

  • Advances in the application of AI to military technology could overturn tactical or strategic force balances or lead to ambiguity over relative power, increasing the chance of strategic miscalculation and international conflict;

  • The creation of high-level machine intelligence (HLMI): an unaided AI system that performs as well as an average human across most cognitive skill tests and economically relevant tasks. If such an HLMI is not value-aligned with wider society, it could cause catastrophic damage, whether by accident or through strategic misuse.

While most of these challenges have not received sufficient attention, several have been mapped in The Malicious Use of Artificial Intelligence report (Brundage and Avin et al. 2018), AI Governance: A Research Agenda (Dafoe 2018), and in the Future of Life Institute's (2019) 14 policy challenges. Greater attention is needed to forecasting these potential challenges. Both the foresight of policy problems and the magnitude of existing issues underline the need for international AI governance.

2. What Values and Principles Should Underpin Cooperation?

There are already over a dozen sets of principles on AI, composed by governments, researchers, standard-setting bodies and technology corporations (cf. Zeng et al. 2019). Most of these coalesce around key principles: ensuring that AI is used for the common good, does not cause harm or impinge on human rights, and respects values such as fairness, privacy and autonomy (Whittlestone et al. 2019). We suggest that the High-level Panel on Digital Cooperation compile and categorise these principles in its synthesis report. Importantly, we need to examine trade-offs and tensions between the principles to refine rules for how they can work in practice. This can inform future negotiations on codifying AI principles.

The international governance of AI should also draw from legal precedents under the UN. In addition to general principles of international law, principles such as the polluter pays principle (those who create externalities should pay for the damage they cause and its management) could be retrofitted from the realm of environmental protection to AI policy. Values from bioethics, such as autonomy, beneficence (use for the common good), non-maleficence (ensuring AI systems do not cause harm or violate human rights), and justice are also applicable to AI (Beauchamp and Childress 2001; Taddeo and Floridi 2018). Governance should also be responsive to existing instruments of international law, and cognizant of recent steps by international regulators on the broader range of global security challenges created by AI (Kunz and Ó hÉigeartaigh 2019). Finally, while some specialisation of AI governance regimes for distinct domains is unavoidable, steps should be taken to ensure that these distinct standards or regimes reinforce rather than clash with each other.

3. Improving Cooperation on AI: Options for Global Governance

International governance of AI should be centred on a dedicated, legitimate and well-resourced regime. This could take numerous forms, including a UN specialised agency (such as the World Health Organisation), a Related Organisation to the UN (such as the World Trade Organisation) or a subsidiary body to the UN General Assembly (such as the UN Environment Programme). Any regime on AI should fulfil the following four objectives:

  • Coordination: To coordinate and catalyse AI-related efforts under existing international treaties and organisations (both specialised agencies and subsidiary bodies);

  • Comprehensive Coverage: To fill extant gaps in international governance, such as the use of AI-enabled surveillance technologies, cyberwarfare and the use of AI in decision-making;

  • Cooperation over Competition: To encourage international cooperation and collaboration between AI groups on projects for the public good;

  • Collective Benefit: To ensure benevolent, responsible development of AI technologies and the equitable distribution of benefits.

The Panel should consider the following options as components of an international regime:

  • A Coordinator and Catalyser of International AI Law: There is already a tapestry of international regulations on AI being developed, including through the International Maritime Organisation (IMO), the Vienna Convention on Road Traffic, and the Council of Europe (such as the Budapest Cybercrime Convention and the Automatic Processing Convention). However, many of these initiatives are fragmented in membership and functions. We welcome the recent efforts of the UN System Chief Executives Board for Coordination, through the High-Level Committee on Programmes, to draft a system-wide AI engagement strategy. This should be strengthened. Moreover, other avenues could be considered: for example, the creation of a coordinator for existing efforts to govern AI, which could also catalyse multilateral treaties and arrangements for neglected issues. This would follow the precedent of the United Nations Environment Programme (UNEP) in synchronizing international agreements on the environment and facilitating new ones, such as the 1985 Vienna Convention for the Protection of the Ozone Layer. New institutions could also be brought together under an umbrella body, as the 1994 World Trade Organisation (WTO) has done for trade agreements.

  • An Intergovernmental Panel on AI (IPAI): There is a dire need for measuring and forecasting the progress and impacts of AI systems. This could include examining the future capabilities of AI across a range of cognitive domains and economic tasks, stocktaking how algorithms are used in decision-making, analysing emerging techniques and technologies, and exploring potential future impacts, such as on employment. An IPAI could provide a legitimate, authoritative voice on the state and trends of AI technologies. We welcome the joint Canadian and French International Panel on AI. However, how it draws on expertise and accesses information needs careful design. If it proves successful, it should eventually be expanded to become truly intergovernmental and to encompass missing issues such as weapons control and AI. The IPAI could inform international governance and perform assessments every three years, as well as quick-response assessments of special issues.

  • A UN AI Research Organisation (UNAIRO): This organisation would operate from a pool of government funding. It could focus on building AI technologies in the public interest, including to help meet international targets such as the 2015 Sustainable Development Goals (SDGs), as called for by the 2018 UN Secretary-General's Strategy on New Technologies (Guterres 2018). A secondary goal could be to conduct basic research on improving AI techniques in the safest, most careful and responsible environment possible. The aim would be to channel AI talent towards cooperation in creating technologies for global benefit.

The outlined options for a regime should be anticipatory, reflexive, responsive and inclusive. This adheres to the key tenets of Responsible Research and Innovation suggested by scholars (Stilgoe et al. 2013). To be inclusive, we suggest following the ILO's innovative model of multipartite representation and voting. In this case, voting rights could be distributed to nation states as well as to representatives of other critical stakeholder groups. An ability to anticipate emerging challenges and respond to the quickly evolving technological landscape would be enabled by the IPAI. Responsiveness could be built into the body by having its principles on AI reviewed and updated every three years. This would ensure that policies reflect the latest developments and in-country experience.

With prudent action and foresight, the UN can help ensure that AI technologies are developed cooperatively for the global good.

References

Beauchamp, T. and Childress, J. (2001). Principles of Biomedical Ethics. Oxford University Press, USA.
Brundage, M., Avin, S. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute and the Centre for the Study of Existential Risk.
Cave, S. and Ó hÉigeartaigh, S. (2018). An AI Race for Strategic Advantage: Rhetoric and Risks. In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
Cave, S. and Ó hÉigeartaigh, S. (2019). Bridging near- and long-term concerns about AI. Nature Machine Intelligence, 1: 5-6.
Dafoe, A. (2018). AI Governance: A Research Agenda. Future of Humanity Institute, University of Oxford.
Guterres, A. (2018). UN Secretary-General's Strategy on New Technologies. United Nations, September 2018. http://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf
Kunz, M. and Ó hÉigeartaigh, S. (forthcoming). Artificial Intelligence and Robotization. In Geiss, R. and Melzer, N. (eds.), Oxford Handbook on the International Law of Global Security. Oxford University Press.
Payne, K. (2018). Artificial Intelligence: A Revolution in Strategic Affairs? IISS.
Stilgoe, J., Owen, R. and Macnaghten, P. (2013). Developing a Framework for Responsible Innovation. Research Policy, 42(9): 1568-1580.
Taddeo, M. and Floridi, L. (2018). How AI Can Be a Force for Good. Science, 361(6404): 751-752. https://doi.org/10.1126/science.aat5991
Whittlestone, J., Nyrup, R., Alexandrova, A. and Cave, S. (2019). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the 2nd AAAI/ACM Conference on AI, Ethics, and Society.
Zeng, Y., Lu, E. and Huangfu, C. (2019). Linking Artificial Intelligence Principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety (AAAI-SafeAI 2019). http://arxiv.org/abs/1812.04814