Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.
GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide some fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.
My view is that most of our policy work to date has been fairly (small c) conservative, and has seldom passed judgment on whether there should be more or less regulation or praised specific actors. You can sample some of that previous work here:
https://www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf
https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/AI-Policy-Levers-A-Review-of-the-U.S.-Governments-tools-to-shape-AI-research-development-and-deployment-–-Fischer-et-al.pdf
https://www.fhi.ox.ac.uk/wp-content/uploads/How-Will-National-Security-Considerations-Affect-Antitrust-Decisions-in-AI-Cullen-OKeefe.pdf
https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-working-paper-Who-owns-AI-Apr2020.pdf
https://ora.ox.ac.uk/objects/uuid:ea3c7cb8-2464-45f1-a47c-c7b568f27665
https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf
https://www.fhi.ox.ac.uk/wp-content/uploads/EU-White-Paper-Consultation-Submission-GovAI-Oxford.pdf
We haven’t yet decided how we’ll manage potential conflicts of interest. Thoughts on what principles to adopt are welcome. Below is a subset of things that are likely to be put in place:
We’re aiming for a board on which no single group (industry, policy, or academia) holds a majority.
Allan will be the co-lead of the organisation. We hope to be able to announce others soon.
Whenever someone has a clear conflict of interest regarding a candidate or a piece of research – say, if we were to publish a ranking of how responsible various AI labs are being – we’ll have that person recuse themselves from the decision.
For context, I expect most folks who collaborate with GovAI will not be directly paid by GovAI. Most will be employed elsewhere and not closely line-managed by the organisation.
FWIW, I agree that for some lines of work you might want to do, managing conflicts of interest is very important, and I’m glad you’re thinking about how to do this.
We’ve now relaunched. We wrote up our current principles with regard to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I’d be curious if folks have thoughts, in particular @ofer.
GovAI is funded by philanthropic organizations. So far, we have received funding from Open Philanthropy, the Center for Emerging Risk Research, and Effective Altruism Funds (a project of the Centre for Effective Altruism).
This is great! I hope GovAI will maintain this transparency about its funding sources, and publish a policy to that effect.
We do not currently accept funding from private companies.
I think it would be beneficial to have a policy that prevents such funding in the future as well. (There could be conflict of interest issues due to the mere possibility of receiving future funding from certain companies.)
(Also, I take it that “private” here means private sector; i.e. this statement applies to public companies as well?)
We will not accept donations that we believe might compromise the independence or accuracy of our work.
Great, this seems super important! Maybe there should be a policy that allows funding from a non-EA source only if all the board members approve it.
In many potential future situations it won’t be obvious whether certain funding might compromise the independence or accuracy of GovAI’s work; one’s judgment about it will be subjective and could easily be influenced by biases (and it could be very tempting to accept the funding).