Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.
GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.
My view is that most of our policy work to date has been fairly (small-c) conservative: it has seldom passed judgment on whether there should be more or less regulation, or singled out specific actors for praise. You can sample some of that previous work here:
We’re not yet decided on how we’ll manage potential conflicts of interest, and thoughts on what principles to adopt are welcome. Below is a subset of the measures that are likely to be put in place:
We’re aiming for a board where no majority comes from any one of industry, policy, or academia.
Allan will be the co-lead of the organisation. We hope to be able to announce others soon.
Whenever someone has a clear conflict of interest regarding a candidate or a piece of research – say we were to publish a ranking of how responsible various AI labs were being – we’ll have the person recuse themselves from the decision.
For context, I expect most folks who collaborate with GovAI to not be directly paid by GovAI. Most folks will be employed elsewhere and not closely line managed by the organisation.
Thanks! I agree that using a term like “socially beneficial” might be better. On the other hand, it might be helpful to couch self-governance proposals in terms of corporate social responsibility, as it is a term already in wide use.
Some brief thoughts (just my quick takes; my guess is that others might disagree, including at GovAI):
Overall, I think the situation is quite different compared to 2018, when I think the talk was recorded. AI governance / policy issues are much more prominent in the media, in politics, etc. The European Commission has proposed some pretty comprehensive AI legislation. As such, there’s more pressure on companies as well as governments to take action. I think there’s also better understanding of what AI policy is sensible. All these things update me against 1 (insofar as we are still in the formative stages) and 2. They also update me in favour of thinking something like: governments will want to take a bunch of actions related to AI, and so we should try to steer those actions in positive directions.
I think the AI policy / governance field is mature enough at this point that it’s not that helpful to think of an AI governance regime as one unitary thing. I much prefer thinking about specific areas of AI governance. Depending on the area, I’d likely have different views on 1-3. For example, it seems likely that companies are best placed to help develop standards that may be used to inform legislation further down the line. I wouldn’t expect companies to be best placed to figure out what the US should do wrt updates to antitrust regulation.
On 3, I think it’s true that companies have incentives in favour of acting prosocially and that we can boost these incentives. I’m not sure those incentives outweigh their other incentives, though. The view is not that e.g. Facebook, Amazon, and Google are, all things considered, going to act in the public interest. I also don’t think Jade-2018 held that view.
Happy to give my view. Could you say something about what particular views or messages you’re curious about? (I don’t have time to reread the script atm)
Thanks Michael! Yeah, I hope it ends up being helpful.
I’m really excited to see LTFF being in a position to review and make such a large number of grants. IIRC, you’re planning on writing up some reflections on how the scaling up has gone. I’m looking forward to reading them!
Thanks for pointing that out, Michael! Super helpful.
You can find the talk here.
Thanks for the catch :) Should be updated now
Hello, I work at the Centre for the Governance of AI at FHI. I agree that more work in this area is important. At GovAI, for instance, we have a lot more talented folks interested in working with us than we have absorptive capacity. If you’re interested in setting something up at MILA, I’d be happy to advise if you’d find that helpful. You could reach out to me at firstname.lastname@example.org
That’s exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I’m remembering correctly!)
To make it maximally helpful, you’d ideally work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that’s not possible (we do get a lot more good applications than we’re able to take on), getting plenty of steer / feedback seems helpful (feel free to send it past me). I would recommend spending a significant amount of time making sure the piece is clearly written, such that someone can quickly grasp what you’re saying and whether it will be relevant to their interests.
It definitely seems true that if I want to specifically figure out what to do with scenario a), studying how AI might affect structural inequality shouldn’t be my first port of call. But it’s not clear to me that this means we shouldn’t have the two problems under the same umbrella term. In my mind, it mainly means we ought to start defining sub-fields with time.
A first guess at what might be meant by AI governance is “all the non-technical stuff that we need to sort out regarding AI risk”. Wonder if that’s close to the mark?
A great first guess! It’s basically my favourite definition, though negative definitions probably aren’t all that satisfactory either.
We can make it more precise by saying (I’m not sure what the origin of this one is, it might be Jade Leung or Allan Dafoe):
AI governance has a descriptive part, focusing on the context and institutions that shape the incentives and behaviours of developers and users of AI, and a normative part, asking how we should navigate the transition to a world of advanced artificial intelligence.
It’s not quite the definition we want, but it’s a bit closer.
It’s a little hard to say, because it will largely depend on who we end up hiring. Taking into account the person’s skills and interests, we will split up my current work portfolio (and maybe add some new things into the mix as well). That portfolio currently includes:
Operations: Taking care of our finances (including some grant reporting, budgeting, fundraising) and making sure we can spend our funds on what we want (e.g. setting up contracts, sorting out visas). It also includes things like setting up our new office and maintaining our website. A lot of our administrative / operations tasks are supported by central staff at FHI, which is great.
Team management: Making sure everyone on the team is doing well and helping improve their productivity. This includes organising team meetings and events and having regular check-ins with everyone.
Recruitment: This includes seeing our various hiring efforts through to fruition, such as those currently ongoing, but also helping onboard and support folks once they join. For example, I’ve spent time supervising a few of our GovAI Fellows as well as Summer Research Fellows. It also includes being on the lookout for and cultivating relationships with folks we might want to hire in the future, by bringing them over for visits, having them give talks, etc.
Outreach: This can include giving talks and organising various events. Currently we’re running a webinar series that I think the new PM would be well suited to take over responsibility for. In the future, this could mean organising conferences as well.
Research management: This includes a lot of activities usually done in collaboration with the rest of the team, ranging from just checking in on research and making sure it’s progressing as planned, to giving in-depth feedback and steering, to deciding where and how something should be published, to in some cases co-authoring pieces. This work requires a lot of context and understanding of the field.
Policy Engagement: We’re starting to put more work into policy engagement, but it’s still in its early stages. There’s a lot of room to do more. Currently, this primarily consists of scanning for opportunities that seem particularly high value and engaging in those. In the future, I’d like us to become more proactive, e.g. defining some clear policy goals and figuring out how to increase the chance they’re realised.
Strategy: Working with Allan and the rest of the team to decide what we should be spending our time on.
I think the most likely thing is that the person will start by working on things like operations, team management, recruitment, and helping organise events. As they absorb more context and develop a better understanding of the AI governance space, they’ll take on more responsibility in other areas such as policy engagement, research management, strategy, or other new projects we identify.
Unfortunately, I’m not on that selection committee, and so don’t have that detailed insight. I do know that there were quite a lot of applications this year, so it wouldn’t surprise me if the tight deadlines originally set end up slipping a little.
I’d suggest you email: email@example.com
Could you say more about the different skills and traits relevant to research project management?
Understanding the research: Probably the most important factor is being able to understand the research. This entails knowing how it connects to adjacent questions and fields and having well-thought-out models of why the research is important. Ideally, the research manager is someone who could contribute, at least to some extent, to the research they’re helping manage. That usually requires a decent amount of context: often you’ll have spent a significant amount of time reading the relevant research and talking to the relevant people.
Common sense & wide expertise: One way in which you can help as a research manager is often to suggest how the research relates to work by others, and so having decently wide intellectual interests is useful. You also want to have a decent amount of common sense to help make decisions about things like where something should be published and what ways a research project could go wrong.
Relevant epistemic virtues: Just like a researcher, a research manager should have internalised epistemic virtues like calibration, humility, and other truth-seeking behaviours. You might be the main person who needs to communicate these virtues to new potential researchers.
People skills: These seem very important. They include helping people become better researchers by getting to know what motivates them, what tends to block them, etc., as well as being able to deal with the potential conflicts and sensitive situations that can arise in research collaborations.
Inclination: I think there’s a certain kind of inclination that’s helpful for research management. You’re excited about dabbling in a lot of different questions, more so than digging deep into a single question. You’re perhaps better at providing ideas, structure, conceptual framing, and feedback than at doing the nitty-gritty work of producing all the research yourself. You also probably need to be fine with being more of a background figure and letting the researchers shine. There are probably a bunch more useful traits I haven’t pointed to.
It is indeed! Editing the comment. Thanks!