Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?
DeepMind is owned by Alphabet (Google). Many interventions related to AI regulation can affect Alphabet's stock price, which Alphabet is legally obligated to try to maximize (regardless of the good intentions that many senior executives there may have). If GovAI is co-led by a DeepMind employee, there is seemingly a severe conflict of interest in anything that GovAI does (or avoids doing) with respect to regulating AI companies.
GovAI’s research agenda (which is currently linked to from their ‘placeholder website’) includes the following:
[...] At what point would and should the state be involved? What are the legal and other tools that the state could employ (or are employing) to close and exert control over AI companies? With what probability, and under what circumstances, could AI research and development be securitized—i.e., treated as a matter of national security—at or before the point that transformative capabilities are developed? How might this happen and what would be the strategic implications? How are particular private companies likely to regard the involvement of their host government, and what policy options are available to them to navigate the process of state influence? [...]
How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?
Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.
GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.
My view is that most of our policy work to date has been fairly (small c) conservative: it has seldom passed judgment on whether there should be more or less regulation, or praised specific actors. You can sample some of that previous work here:
We haven’t yet decided how we’ll manage potential conflicts of interest; thoughts on what principles to adopt are welcome. Below is a subset of measures that are likely to be put in place:
We’re aiming for a board that does not have a majority of folks from any of industry, policy, or academia.
Allan will be the co-lead of the organisation. We hope to be able to announce others soon.
Whenever someone has a clear conflict of interest regarding a candidate or a piece of research – say we were to publish a ranking of how responsible various AI labs were being – we’ll have the person recuse themselves from the decision.
For context, I expect most folks who collaborate with GovAI to not be directly paid by GovAI. Most folks will be employed elsewhere and not closely line managed by the organization.
FWIW, I agree that for some lines of work you might want to do, managing conflicts of interest is very important, and I’m glad you’re thinking about how to do this.
Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper, “The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity”.