(Edit: Disclosure: I am executive director of CSER)
Thanks for the good questions. These two submissions are very recent, so there has been little time to demonstrate follow-on influence or impact. Here is some evidence, on these and previous submissions, indicating the work was likely well-received and influential:
The CSER/GovAI researchers’ input to the UN was one of a small subset of submissions chosen for presentation at a ‘virtual town hall’ organised by the UN Panel (108 submissions; 6 presented).
House of Lords AI call (2017/2018): The CSER/CFI submissions to the House of Lords AI call for evidence were favourably received. We were subsequently contacted for further input on specific questions (including existential risk, AI safety, and horizon-scanning). The committee requested a visit to Cambridge to hear presentations and discuss further. They organised three such visits; the other two were to DeepMind and the BBC. Again, this represents visits to a small subset of the groups/individuals who participated; there were 223 submissions (plus an additional 22 oral presentations to the committee, including one from Nick Bostrom). We received informal feedback that the submissions were influential, including material being prominently displayed in presentations during committee meetings. Work from CSER and partners, including the Malicious Use of AI report, is referenced in the subsequent House of Lords Report.
House of Commons AI call (2016): There was a joint CSER/FHI submission, as well as an individual submission from a senior CSER/CFI scholar. Both resulted in invitations to present evidence in Parliament (again, only extended to a small subset, though I don’t have the numbers to hand). The individual submission, from then-CSER Academic Director Huw Price, made one principal recommendation: “What the UK government can most usefully add to this mix, in my view, is a standing body of some kind, to play a monitoring, consultative and coordinating role for the foreseeable future… I recommend that the Committee propose the creation of a standing body under the purview of the Government Chief Scientific Adviser, charged with the task of ensuring continuing collaboration between technologists, academic groups including the Academies, and policy-makers, to monitor and advise on the longterm future of AI.” While it’s hard to prove influence definitively, the Committee followed up with the specific recommendation: “We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression” (https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/896/89602.htm). This was subsequently followed by the establishment of the Centre for Data Ethics and Innovation, which has a senior CSER/CFI member on its board and a not-dissimilar structure and remit: “The Centre for Data Ethics and Innovation (CDEI) is an advisory body set up by Government and led by an independent board of expert members to investigate and advise on how we maximise the benefits of data-enabled technologies, including artificial intelligence (AI).” (https://www.gov.uk/government/groups/centre-for-data-ethics-and-innovation-cdei)
There have been various other follow-ups and engagements with government that I’m less able to write about openly; these include meetings with policymakers and civil servants, a series of joint workshops with a relevant government department on topics relating to the Malicious Use report and other CSER work, and a planned workshop with CDEI.
Thanks for both of these answers! I’m pleasantly surprised by the strength and clarity of the positive feedback (even if some of it may result from the Cambridge name, as you speculated). I’m also surprised at the sheer number of submissions to these groups, and glad to see that CSER’s material stands out.
Thanks Aaron!
Most of our submissions are in collaboration with other leading scholars/organisations, e.g. FHI/GovAI and CFI, so credit should rightly be shared. (We tend to coordinate with other leading organisations and scholars when considering a submission, which often naturally leads to a joint submission.)