Announcing the GovAI Policy Team
The AI governance space needs more rigorous work on what influential actors (e.g. governments and AI labs) should do in the next few years to prepare the world for advanced AI.
We’re setting up a Policy Team at the Centre for the Governance of AI (GovAI) to help address this gap. The team will primarily focus on AI policy development from a long-run perspective. It will also spend some time advising on and advocating for recommendations, though we expect to lean heavily on other actors for that. Our work will be most relevant to the governments of the US, UK, and EU, as well as AI labs.
We plan to focus on a handful of bets at a time. Initially, we are likely to pursue:
Compute governance: Is compute a particularly useful governance node for AI? If so, how can this tool be used to meet various AI governance goals? Potential goals for compute governance include monitoring capabilities, restricting access to capabilities, and identifying high-risk systems so that they can be subjected to significant scrutiny.
Corporate governance: What kinds of corporate governance measures should frontier labs adopt? Questions include: What can we learn from other industries to improve risk management practices? How can the board of directors most effectively oversee management? How should ethics boards be designed?
AI regulation: What present-day AI regulation would be most helpful for managing risks from advanced AI systems? Example questions include: Should foundation models be a regulatory target? What features of AI systems should be mandated by AI regulation? How can we help create more adaptive and expert regulatory ecosystems?
We’ll try several approaches to AI policy development, such as:
Back-chaining from desirable outcomes to concrete policy recommendations (e.g. how can we increase the chance there are effective international treaties on AI in the future?);
Considering what should be done today to prepare for some particular event (e.g. the US government makes an Apollo Program-level investment in AI);
Articulating and evaluating intermediate policy goals (e.g. “ensure the world’s most powerful AI models receive external scrutiny by experts without causing diffusion of capabilities”);
Analyzing what can and should be done with specific governance levers (e.g. the three bets outlined above);
Evaluating existing policy recommendations (e.g. increasing high-skilled immigration to the US and UK);
Providing concrete advice to decision-makers (e.g. providing input on the design of the US National AI Research Resource).
Over time, we plan to evaluate which bets and approaches are most fruitful and refine our focus accordingly.
The team currently consists of Jonas Schuett (specialization: corporate governance), Lennart Heim (specialization: compute governance), and myself (team lead). We’ll also collaborate with the rest of GovAI and people at other organizations.
We’re looking to grow the team. We’re hiring Research Scholars (deadline: August 7th), hoping to add 2 people to the team. We’re also planning to work with people in the GovAI 3-month Fellowship (Winter Fellowship deadline: August 4th) and are likely to open applications for Research Fellows in the near future (you can submit expressions of interest now). We’re happy for new staff to work out of Oxford (where most of GovAI is based), the Bay Area (where I am based), or remotely.
If you’d like to learn more, feel free to leave a comment below or reach out to me at markus.anderljung@governance.ai.
Hi Markus, this sounds really promising. I’ve been wanting to ask—does GovAI have any available opportunities for very early-career EAs (e.g., in undergrad), and if not, do you plan to offer some in the future? I’ve been interested in AI policy/governance for a while, but I’m not quite sure where to begin working and engaging with the field.
Hi Lexley, good question. Kirsten’s suggestions are all great. To those, I’d add:
Try to work as a research assistant for someone who you think is doing interesting work. More so than other roles, RA positions are often not advertised and are set up on an ad hoc basis. Perhaps the best route in is to read someone’s work and reach out to them.
Another thing you could do is take a stab at some important-seeming question independently. You could, e.g., pick a research question hinted at in a paper or piece (some have a section specifically with suggestions for further work), mentioned in a research agenda (e.g. Dafoe 2018), or included in lists of research ideas (GovAI collated one here, and Michael Aird, I think, sporadically updates this collection of lists of EA-relevant research questions).
My impression is that you can join the AGI Safety Fundamentals as an undergrad.
You could also look into the various “ERIs”: SERI, CHERI, CERI, and so on.
As for GovAI, we have in the past engaged undergrads as research assistants, and I could imagine us taking on particularly promising undergrads for the GovAI Fellowship. However, overall, I expect our comparative advantage will lie in working with folks who either have significant context on AI governance or who have relevant experience from some other domain. It may also lie in producing writing that can help people navigate the field.
One other option: My AI Governance and Strategy team at Rethink Priorities offers 3-5 month fellowships and permanent research assistant roles, either of which can be done at anywhere from 20 to 40 hours per week, depending on the candidate’s preference. We hire almost entirely based on performance in our work tests and interviews rather than on credentials or experience (though of course experience often helps people succeed in those), and we have sometimes hired people during or right after their undergraduate degrees.
We aren’t currently actively hiring, but people can express interest here.
(I just happened to read this post because I’m interested in GovAI, and then realised my team’s roles seem relevant to this thread—I didn’t originally come here to do recruiting :)
Also, I’m really excited about GovAI’s work and about them getting great hires, and I’d suggest people typically apply to many orgs/roles and see what happens rather than trying to just choose one or a few to apply to.)
Yeah, I update that whenever I learn of a new relevant collection of research questions.
That said, fwiw, I’d generally recommend that people interested in getting into research in some area:
Focus mostly on things like applying to jobs, expressing interest in working with some mentor, or applying to research training programs like the ERIs.
See independent research as only (a) a “cheap test of fit” that you spend a few days on (on weekends and such), rather than a few months, or (b) a backup option if applying to lots of roles isn’t yet working out, or a thing you do while waiting to hear back.
Some people/situations would be exceptions to that general advice, but generally I think having more structure, mentorship, feedback, etc. is better.
Thank you so much for taking the time to reply! There are so many available resources, and most advice doesn’t seem to be aimed at people at my current career level, so these are really helpful in nudging me in the right direction :D
Hi Lexley, I’m sure Markus will come back with an answer, but I thought I’d suggest some other ways an undergraduate or new grad could build their knowledge and credibility:
a) Write a relevant essay or do a project for one of your classes. For example, if you’re taking a political science or economics class, you could write an essay about “Does [major theory we’ve studied] explain what we’re seeing in the current governance of AI?” You could share your essay for feedback on the Facebook group “Effective Altruism Editing and Review” and potentially even post it here, or post a summary.
b) Take an internship or job somewhere that you can learn about government or governance. For example, working in local or national government; working for a regulator; working for a certification body like “fair trade” or “organic”; working for a tech company or lobbyist, especially if you can get a job taking notes for their boards or something like that. Pay attention to who’s making decisions, and who the decision-makers pay attention to—who has the power in different situations?
c) Read papers and articles in the area you’re interested in, and leave polite comments or questions. If a professor at your university has written a paper you think might be relevant, go to their office hours or ask to meet them and ask them some questions about how their work could be applied to AI governance. Consider starting a blog writing summaries or reviews of relevant papers and/or introducing some of your own thoughts. Consider going on Twitter, following people you admire, and replying to them occasionally.
I hope these ideas are useful and please let me know if you try them! I’m @Kirsten3531 on Twitter if you decide to go the Twitter route :)
Hi Kirsten, thank you so much for this write-up!! :D This is really the sort of guidance I’ve been searching for, since most advice seems to be primarily aimed at those in their mid-career or those who have already held senior positions. Will follow you on Twitter if that’s okay! (Also just realized we interacted earlier today, I’m @doseofzero :> )
Just popping in to say you might find this post (of mine) useful: “Interested in EA/longtermist research careers? Here are my top recommended resources”. Also, here’s a comment I left on it:
“Resources that are only relevant to people interested in AI governance and (to some extent) technical AI safety
You could participate in the AGI Safety Fundamentals course’s Governance track, or—when the course isn’t running—work through all or part of the curriculum independently. This seems like an unusually good way for most people to learn about AI risk and AI governance (from a longtermist or existential-risk-focused perspective).
Description of some organizations relevant to long-term AI governance (non-exhaustive) (2021) collects and gives brief overviews of some organizations you might be interested in applying to. (This link is from week 7 of the AGI Safety Fundamentals course’s Governance track.)
I think Some AI Governance Research Ideas would be my top recommendation for a public list of AI governance research ideas.
But I’d suggest being discerning with this list, as I also think that some of those ideas are relatively low-priority and that the arguments presented for prioritizing those particular ideas are relatively weak, at least from a longtermist/existential-risk-focused perspective.”
“I’d suggest being discerning with this list”
Definitely agree with this!
Great development. Does this mean GovAI will start providing input to more government consultations on AI and algorithms? The UK government recently published a call for input on its AI regulation strategy—is GovAI planning to respond to it? On the regulation area—there are a lot of different areas of regulation (financial, content, communication infra, data protection, competition and consumer law), and the UK government is taking a decentralised approach, relying on individual regulators’ areas of expertise rather than creating a central body. How will GovAI stay on top of these different subject matter areas?
We’ve already started to do more of this. Since May, we’ve responded to three RFIs and similar requests (you can find them here: https://www.governance.ai/research): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We’re likely to respond to the AI regulation policy paper as well, though we’ve already provided input to that process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring.
I think we’ll struggle to build expertise in all of these areas, but we’re likely to add more of it over time and build networks that allow us to provide input in these other areas, should we find doing so promising.