Deliberative Polling: A Better Way to Gauge Public Opinion on AGI

What does the public think about AGI? AI alignment? Not much, probably. Or perhaps they have heard a speech from Elon Musk and think AGI = God, or something similar. People are quite uninformed about AGI, even if they have some understanding of AI in terms of its potential for automation or advances in robotics. This is likely just rational ignorance: given the myriad ways our attention is directed and diverted today, it is unrealistic, in my view, to expect every citizen to become well informed about each potential x-risk.

Given the vast potential consequences of AGI, however, it is clearly wrong to assume that the public is uninterested in the outcomes. Furthermore, prominent researchers seem to want more engagement with the public on the topic. This is part of the stated mission of organizations such as the Partnership on AI, OpenAI, and the Future of Humanity Institute. A poll of researchers at the Human-Level AI Conference found that they were interested in asking the public a range of questions, from "What responsibilities should we never transfer to machines?" to "What problem should humanity solve first using AI?", but that there was no reputable poll to reference. (GoodAI survey: https://medium.com/goodai-news/shaping-a-global-survey-on-agi-562ee7baa983.)

The Center for the Governance of AI has done public polling on AI in general, rather than general AI: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html. The findings indicate a few interesting phenomena. First, about 38-40% of respondents, even after being given some information about AI and its implications, either did not know what their opinion was or neither supported nor opposed AI development. Second, there were large demographic discrepancies, especially along the dimensions of gender, education, and income. Finally, AI risk was rated lowest in both likelihood and impact among the 15 global risks participants were asked about.

I see the potential to improve on the FHI polling by using the Deliberative Polling methodology developed at Stanford's Center for Deliberative Democracy (more info at https://cdd.stanford.edu/). Deliberative Polling selects a random, representative sample of the public (important for capturing the clear differences in opinion across demographics) and gives them a questionnaire about the topic. A subset of the respondents is then chosen to participate in deliberation sessions and briefed with materials on the topic. They engage in small-group discussions with trained experts, where participants can pose a set of questions to the experts. The session culminates in a second questionnaire to determine how opinion has changed (and historically, this change in opinion has been substantial). Finally, the results of the polls are released to media outlets.
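
To make the pre/post measurement step concrete, here is a minimal sketch (in Python) of how the change in opinion between the two questionnaires might be summarised. The participant IDs, the 1-5 agreement scale, and the `opinion_shift` helper are hypothetical illustrations of the general idea, not CDD's actual instruments or analysis pipeline.

```python
# Illustrative sketch of the pre/post measurement step in a deliberative poll.
# The data, IDs, and 5-point agreement scale below are hypothetical.
from statistics import mean

# Each entry: one participant's answer (1 = strongly oppose, 5 = strongly support)
# to the same question before and after the deliberation sessions.
pre_responses = {"p01": 3, "p02": 2, "p03": 5, "p04": 3, "p05": 1}
post_responses = {"p01": 4, "p02": 4, "p03": 5, "p04": 2, "p05": 3}

def opinion_shift(pre: dict, post: dict) -> dict:
    """Summarise how opinion moved between the two questionnaires."""
    shared = sorted(set(pre) & set(post))      # only participants who answered both
    deltas = [post[p] - pre[p] for p in shared]
    return {
        "n": len(shared),
        "pre_mean": mean(pre[p] for p in shared),
        "post_mean": mean(post[p] for p in shared),
        "mean_shift": mean(deltas),
        "share_changed": sum(d != 0 for d in deltas) / len(shared),
    }

if __name__ == "__main__":
    summary = opinion_shift(pre_responses, post_responses)
    print(f"{summary['n']} participants, mean opinion "
          f"{summary['pre_mean']:.2f} -> {summary['post_mean']:.2f} "
          f"(mean shift {summary['mean_shift']:+.2f}; "
          f"{summary['share_changed']:.0%} changed their answer)")
```

In a real Deliberative Poll the analysis would of course be richer (per-question breakdowns, demographic weighting, significance testing), but the core output is exactly this kind of before/after comparison.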

Deliberative Polling answers the question "How would the public deal with an issue if they were well educated about it?" I think this is key for AGI considerations, since giving participants only a small introduction to the topic doesn't let them think through the full range of possible outcomes for themselves and others. Results from Deliberative Polling sessions could also guide further polling and research, and spark public debate through publication of the final questionnaire results. Perhaps most importantly, the experts designing the values of AI systems could learn where 'regular people' stand on those values once they understand the situation we face. Win-win.