We (Open Philanthropy) have a program called “Career Development and Transition Funding” (CDTF) — see this recent Forum post for more information on the updates we made to it.
This program supports a variety of activities, including (but not necessarily limited to) graduate study, unpaid internships, self-study, career transition and exploration periods, postdocs, obtaining professional certifications, online courses, and other types of one-off career-capital-building activities. To learn more about the entire scope of the CDTF program, I’d encourage you to read this broad list of hypothetical applicant profiles that we’re looking for.
This brief post, which partly grew out of conversations I recently had with numerous AI governance experts about current talent needs, serves as an addendum to the existing information on the CDTF page. Here, I’ll outline a few examples of the kinds of people (“talent profiles”) I’m particularly excited to see get involved in the AI governance space.
To be clear, this list includes only a few of the many kinds of people I’d be excited to see apply to the CDTF program; there are many other promising profiles out there that I won’t manage to cover below.
My aim is not just to encourage more applicants from these groups to the CDTF program, but also to broadly describe what I see as some pressing talent pipeline gaps in the AI governance ecosystem.
Hypothetical talent profiles I’m excited about
A hardware engineer at a leading chip design company (Nvidia), chip manufacturer (TSMC), maker of chip manufacturing equipment (ASML), or cloud compute provider (Microsoft, Google, Amazon) who has recently become interested in developing hardware-focused interventions and policies that could reduce risks from advanced AI systems through improved coordination.
A machine learning researcher who wants to pivot to working on the technical side of AI governance research and policy, such as evaluations, threat assessments, or other aspects of AI control.
An information security specialist who has played a pivotal role in safeguarding sensitive data and systems at a major tech company, and would like to use that experience to help secure advanced AI systems against theft.
Someone with 10+ years of professional experience, excellent interpersonal and management skills, and an interest in transitioning to policy work in the US, UK, or EU.
A policy professional who has spent years working in DC, with a strong track record of driving influential policy changes. This individual could have experience either in government as a policy advisor or at a think tank or advocacy group, and will have developed a deep understanding of the political landscape and how to navigate it. It would be an additional bonus if they had experience driving bipartisan policy change by working with people on both sides of the aisle.
A legal scholar or practicing lawyer with experience in technology, antitrust, and/or liability law, who is interested in applying these skills to legal questions relevant to the development and deployment of frontier AI systems.
A US national security expert with experience in international agreements and treaties, who wants to produce policy reports that consider potential security implications of advanced AI systems.
An experienced academic with an interdisciplinary background, strong mentoring skills, and a clear vision for establishing and leading an AI governance research institution at a leading university, focused on exploring questions such as how to measure and benchmark AI capabilities.
A biologist who has either researched or directly helped craft biosecurity policy, and is now interested in exploring AI-biotechnology threat models and mitigation strategies.
A person with strong research, analytical, and writing skills, ideally suited for a career in journalism, who would like to focus their writing on the risks and benefits of AI progress.
An economist who specializes in studying economic growth, and is interested in forecasting future economic trends related to AI development and deployment.
A technology policy expert with experience in standard-setting, who would like to use their expertise to work on developing AI safety standards.
If any of these examples fit you, and you believe the CDTF program could increase your impact, I encourage you to apply. Or, if you know someone who fits any of those talent profiles, send this post to them.
Finally, if you identify with one of these talent profiles but don’t feel the CDTF program is right for your current goals, and you’re nonetheless interested in working on these topics in the future, please consider taking a moment to fill out this brief (under 5 minutes) Google form to share more about yourself and your interests. I’m mostly looking to get a sense of who is out there, but I might periodically reach out to respondents to share information about possible opportunities in the space.
This is really encouraging, thanks for writing it!
I filled out the form and am also planning to apply for CDTF, but to be honest I’m pretty uncertain about what the next steps should be in my case. Having a pure tech background (I think I qualify for #1), I find it really hard to navigate the policy space. I’m struggling with questions like which opportunities to apply for and what skills to learn/improve. I think mentoring would also be crucial for transitions like this. Is there also a program out there to help with that? (Relevant fellowships seem to be flooded with good applicants right now.)
Worth considering the Blue Dot course on AI Governance: https://course.aisafetyfundamentals.com/governance
Hi Péter, thanks for your comment.
Unfortunately, as you’ve alluded to, technical AI governance talent pipelines are still quite nascent. I’m working on improving this. But in the meantime, I’d recommend:
Speaking with 80,000 Hours (they can be useful for connecting you with possible mentors and opportunities)
Regularly browsing the 80,000 Hours job board and applying to the few technical AI governance roles that occasionally pop up on it
Reading 80,000 Hours’ career guide on AI hardware (particularly the section about how to enter the field) and their write-up on policy skills
Great to hear there’s work in progress on this! Looking forward to it!
Also thanks for the suggestions, will try to get the most out of these!