This post and CSER’s other advice post made me wonder how well one can gauge the effect of providing guidance to large governmental bodies.
For these or any past submissions, have you been able to gather evidence for how much CSER’s advice mattered to an entire panel (or even just one member of a panel who took it especially seriously)?
Another question: Are any organizations providing advice to these panels that directly contradicts CSER’s advice, or that seems to push in a bad or unimportant direction? It’s hard to tell how much of this is “commonsense things everyone agrees on that just need more attention” vs. “controversial measures to address problems some people either don’t believe in or think should be handled differently”.
(Edit: Disclosure: I am executive director of CSER)
Re: your second question, I don’t personally have a good answer on bad advice: these calls receive hundreds of submissions, and I haven’t read all or even most of them. (I do recall seeing some that dismissed or ridiculed AI x-risk as conceptually nonsensical.)
Submissions I’ve been involved in tend towards (a) summarising already-published work, (b) making sensible, noncontroversial recommendations, and (c) occasionally, gently keeping the Overton window open (e.g. ‘many AI experts think AGI is plausible at some point in the future, but on a very uncertain timeline; we should take safe development seriously, and there is good work that can be done, and is being done, at present on technical AI safety’, as opposed to ‘AI x-risk is real, scary and imminent’). The aim of (c) is typically to counterbalance the ‘AI safety/alignment is nonsense and everyone working on it is deluded’ view rather than to promote action.
There are a few reasons for this. These open calls for evidence are noisy processes, and not the best way to influence policy on controversial topics or in very concrete ways. However, producing high-quality input for them is a good way to get established as a reputable, trustworthy source of expertise and a good partner. In particular, my impression is that it gives people in government, including those already concerned with these issues, greater scope to engage with orgs like ours in more in-depth conversation and analysis (more appropriate for the ‘controversial/concrete, action-relevant’ engagement). It’s easier to justify investing time and resources in an org that has been favourably featured in these processes than in a ‘random centre somewhere working on slightly unusual topics’. But it can be hard to disentangle exactly how much these submissions play a role, versus the Cambridge/Oxford ‘brand’, a track record of academic success and publications, 1-1 meetings with policymakers that would have happened anyway, etc.
(Edit: Disclosure: I am executive director of CSER)
Thanks for the good questions. These two submissions are very recent, so there has been little time for follow-on influence/impact to show. Some evidence on these and previous submissions indicating the work was likely well-received/influential:
The CSER/GovAI researchers’ input to the UN was one of a small subset chosen for presentation at a ‘virtual town hall’ organised by the UN Panel (108 submissions; 6 presented).
House of Lords AI call (2017/2018): The CSER/CFI submissions to the House of Lords AI call for evidence were favourably received. We were subsequently contacted for more input on specific questions (including existential risk, AI safety, and horizon-scanning). The committee requested a visit to Cambridge to hear presentations and discuss further. They organised three such visits; the other two were to DeepMind and the BBC. Again, this represents visits to a small subset of the groups/individuals who participated; there were 223 submissions (although there were also an additional 22 oral presentations to this committee, including one from Nick Bostrom). We received informal feedback that the submissions were influential, including material being prominently displayed in presentations during committee meetings. Work from CSER and partners, including the Malicious Use of AI report, is referenced in the subsequent House of Lords Report.
House of Commons AI call (2016): There was a joint CSER/FHI submission, as well as an individual submission from a senior CSER/CFI scholar. Both resulted in invites to present evidence in Parliament (again, only extended to a small subset, though I don’t have the numbers to hand). The individual submission, from then-CSER Academic director Huw Price, made 1 principal recommendation: “What the UK government can most usefully add to this mix, in my view, is a standing body of some kind, to play a monitoring, consultative and coordinating role for the foreseeable future… I recommend that the Committee propose the creation of a standing body under the purview of the Government Chief Scientific Adviser, charged with the task of ensuring continuing collaboration between technologists, academic groups including the Academies, and policy-makers, to monitor and advise on the longterm future of AI.” While it’s hard to prove influence definitively, the Committee followed up with the specific recommendation: “We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression” https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/896/89602.htm. This was subsequently followed by the establishment of the Centre for Data Ethics and Innovation, which has a senior CSER/CFI member on the board, and has a not-dissimilar structure and remit: “The Centre for Data Ethics and Innovation (CDEI) is an advisory body set up by Government and led by an independent board of expert members to investigate and advise on how we maximise the benefits of data-enabled technologies, including artificial intelligence (AI).” https://www.gov.uk/government/groups/centre-for-data-ethics-and-innovation-cdei
There have been various other follow-ups and engagements with government that I’m less able to write openly about; these include meetings with policymakers and civil servants; a series of joint workshops with a relevant government department on topics relating to the Malicious Use report and other CSER work; and a planned workshop with CDEI.
Thanks for both of these answers! I’m pleasantly surprised by the strength and clarity of the positive feedback (even if some of it may result from the Cambridge name, as you speculated). I’m also surprised at the sheer number of submissions to these groups, and glad to see that CSER’s material stands out.
Thanks Aaron!
Most of our submissions are made in collaboration with other leading scholars/organisations, e.g. FHI/GovAI and CFI, so credit should rightly be shared. (We tend to coordinate with other leading orgs/scholars when considering a submission, which often naturally leads to a joint submission.)
These are good questions, thanks Aaron. A quick placeholder to say that I’ll give an answer (from my personal perspective) tomorrow. (Haydn may also have comments on, and evidence relating to, this).