The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g., as of now, no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, and job openings as scarce as ever). (Also no governance course from AGI Safety Fundamentals in a while, and no governance-focused programs elsewhere.)[1] We’re presumably missing out on a lot of talent.
I’m not sure what the solution is, or even what the problem is—I think it’s somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it’s dropping the ball and it’s nobody’s job to notice and nobody has great solutions anyway].
If you have information or takes, I’d be excited to learn. If you’ve been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I’d be really excited to hear your perspective (feel free to PM).
(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically; kudos to them. Maybe I should ask them why they haven’t expanded to AI strategy, or whether they have takes on that pipeline. For now, maybe they’re evidence that pipeline improvements only happen when someone makes them a priority...)
[1] Added on May 24: the comments naturally focused on these examples, but I wasn’t asserting that summer research programs or courses are the most important bottlenecks; they were just salient to me recently.
I totally agree there’s a gap here. At BlueDot Impact (/ AGI Safety Fundamentals), we’re currently working on understanding the pipeline for ourselves.
We’ll be launching another governance course in the next week, and in the longer term we’ll publish more information on governance careers on our website, as and when we’ve established it for ourselves.
In the meantime, there’s great advice on this account. It’s mostly targeted at people in the US, but there may be some transferable lessons:
https://forum.effectivealtruism.org/users/us-policy-careers
May I just add that, as someone who self-studied my way through the public reading list recently, I’d rate many of the resources there very highly.
It’s worth mentioning the Horizon Fellowship and RAND Fellowship.
I also have the impression that there’s a gap, and I’d be interested in whether that’s because funders aren’t prioritizing it enough or because there’s a lack of (sufficiently strong) proposals.
Another AI governance program that just started its second round is Training For Good’s EU Tech Policy Fellowship; I think its reading and discussion group component has significant overlap with the AGISF program. (Beyond that, it offers policy training in Brussels, plus, for some fellows, a 4-6 month placement at an EU think tank.)
This is a timely post. It feels like funding is a critical obstacle for many organisations.
One idea: given the recent calls by many tech industry leaders for rapid work on AI governance, is there an opportunity to request direct funding from them for independent work in this area?
To be very specific: has anyone contacted OpenAI and said: “Hey, we read with great interest your recent article about the need for governance of superintelligence. We have some very specific work (list specific items) in that area which we believe can contribute to making this happen. But we’re massively understaffed and underfunded. With $1m from you, we could put 10 researchers to work on these questions for a year. Would you be willing to fund this work?”
What’s in it for them? Two things:
1. If they are sincere (as I believe they are), then they will want this work to happen, and some groups in the EA sphere are probably better placed to make it happen than they themselves are.
2. We can offer independence (any results will be from the EA group, not from OpenAI, and not influenced or edited by OpenAI), but at the same time we can openly credit them with funding this work, which would be good PR and a show of good faith on their part.
Forgive me if this is something that everyone is already doing all the time! I’m still quite new to EA!
Given the (accusations of) conflicts of interest in OpenAI’s calls for regulation of AI, I would be quite averse to relying on OpenAI for funding for AI governance work.