Long-term AI policy strategy research and implementation


In a nutshell: Advancing AI technology could have both huge upsides and huge downsides, including potentially catastrophic risks. To manage these risks, we need people making sure the deployment of AI goes well, by thinking about how to:

  • Ensure broad sharing of the benefits from developing powerful AI systems.

  • Avoid exacerbating military competition or conflict caused by increasingly powerful AI systems.

  • Ensure that the groups that develop AI work together to develop and implement safety features.

If you are well suited to this career, it may be the best way for you to have a social impact.

Review status

Based on a medium-depth investigation

Why might working to improve AI policy be high impact?

As we’ve argued, in the next few decades, we might see the development of powerful machine learning algorithms with the potential to transform society. This could have major upsides and downsides, including the possibility of catastrophic risks.

To manage these risks, we need technical research into the design of safe AI systems (including the ‘alignment problem’), which we cover in a separate career review.

But in addition to the technical problems, there are many other important questions to address. These can be roughly categorised into three key challenges of transformative AI strategy:

  • Ensuring broad sharing of the benefits from developing powerful AI systems, rather than letting AI’s development harm humanity or unduly concentrate power.

  • Avoiding exacerbating military competition or conflict caused by increasingly powerful AI systems.

  • Ensuring that the groups that develop AI work together to develop and implement safety features.

We need a community of experts who understand the intersection of modern AI systems and policy, and work together to mitigate long-term risks and ensure humanity reaps the benefits of advanced AI.

What does this path involve?

Experts in AI policy strategy would broadly carry out two overlapping activities:

  1. Research — to develop strategy and policy proposals.

  2. Implementation — working together to put policy into practice.

We see these activities as just as important as the technical ones, but currently they are more neglected. Many of the top academic centres and AI companies have started to hire researchers working on technical AI safety, and there’s perhaps a community of 20–50 full-time researchers focused on the issue. However, only a handful of researchers are focused on strategic issues or working in AI policy with a long-term perspective.

Note that there is already a significant amount of work being done on nearer-term issues in AI policy, such as the regulation of self-driving cars. What’s neglected is work on issues that are likely to arise as AI systems become substantially more powerful than those in existence today — so-called ‘transformative AI’ — such as the three non-technical challenges outlined above.

Some examples of top AI policy jobs to work towards, which fit a variety of skill types, are listed at the end of this review.

Examples of people pursuing this path

Helen Toner

Helen worked in consulting before getting a research job at GiveWell and then Open Philanthropy. From there, she explored a couple of different cause areas, and eventually moved to Beijing to learn about the intersection of China and AI. When the Center for Security and Emerging Technology (CSET) was founded, she was recruited to help build the organisation. CSET has since become a leading think tank in Washington on the intersection of emerging technology and national security.

Ben Garfinkel

Ben graduated from Yale in 2016, where he majored in physics, math, and philosophy. After graduating, Ben became a researcher at the Centre for Effective Altruism and then moved to the Centre for the Governance of AI (GovAI), which began at the University of Oxford’s Future of Humanity Institute and is now part of the Centre for Effective Altruism. He’s now the acting director there. As of December 2021, GovAI is hiring.

How to assess your fit

If you can succeed in this area, then you have the opportunity to make a significant contribution to what might well be the most important issue of the next century.

A key question in assessing your fit for this path is whether you have a reasonable chance of getting one of the top jobs listed at the end of this review.

The government and political positions require people with a well-rounded skillset, the ability to meet lots of people and maintain relationships, and the patience to work with a slow-moving bureaucracy. It’s also ideal if you’re a US citizen (which may be necessary to get security clearance), and don’t have an unconventional past that could create problems if you choose to work in politically sensitive roles.

The more research-focused positions would typically require the ability to get into a top 10 graduate school in a relevant area, and a deep interest in the issues. For instance, when you read about the issues, do you get ideas for new approaches to them? Read more about predicting fit in research.

In addition, you should only enter this path if you’re convinced of the importance of long-term AI safety. This path also requires making controversial decisions under huge uncertainty, so it’s important to have excellent judgement, caution, and a willingness to work with others; without these, it would be easy to have an unintended negative impact. This is hard to judge, but you can get some information early on by seeing how well you work with others in the field.

How to enter this field

In the first few years of this path, you’d focus on learning about the issues and how government works, meeting key people in the field, and doing research, rather than pushing for a specific proposal. AI policy and strategy is a deeply complicated area, and it’s easy to make things worse by accident (e.g. see the Unilateralist’s Curse).

Some common early career steps include:

  • Relevant graduate study. Some especially useful fields include international relations, strategic studies, machine learning, economics, law, public policy, and political science. Our top recommendation right now is machine learning if you can get into a top 10 school in computer science. Otherwise, our top recommendations are: i) law school if you can get into Yale or Harvard, ii) international relations if you want to focus on research, and iii) strategic studies if you want to focus on implementation.

  • Working at a top AI company, especially DeepMind and OpenAI.

  • Working in general entry-level government and policy positions that let you gain expertise and connections, such as think tank internships, being a researcher or staffer for a politician, joining a campaign, or joining a government leadership scheme.

This field is at a very early stage of development, which creates a number of challenges. For one, the key questions have not been formalised, which creates a need for ‘disentanglement research’ to enable other researchers to get traction. For another, there is a lack of mentors and positions, which can make it hard for people to break into the area.

Until recently, it was very hard to enter this path as a researcher unless you could relatively quickly become one of the top (approximately) 30 people in the field. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive, but positions will continue to gradually open up. On the other hand, the field is still small enough for top researchers to make an especially big contribution by doing field-founding research.

If you’re not able to land a research position now, then you can either (i) continue to build up expertise and contribute to research when the field is more developed, or (ii) focus more on the policy positions, which could absorb hundreds of people.

Most of the first steps on this path also offer widely useful career capital. For instance, depending on the sub-area you start in, you could often switch into other areas of policy, the application of AI to other social problems, operations, or earning to give. So, the risks of starting down this path, if you decide to switch later, are not too high.

Examples of top AI policy jobs to work towards include:

  • The American Association for the Advancement of Science offers Science & Technology Policy Fellowships, which provide hands-on opportunities to apply scientific knowledge and technical skills to important societal challenges. Fellows are assigned for one year to a selected area of the United States federal government, where they participate in policy development and implementation.

  • The Center for Security and Emerging Technology at Georgetown University produces data-driven research at the intersection of security and technology (including AI, advanced computing, and biotechnology) and provides nonpartisan analysis to the policy community. See current vacancies.

  • The Centre for the Governance of AI is focused on building a global research community that’s dedicated to helping humanity navigate the transition to a world with advanced AI. See current vacancies.

  • The Centre for Long-Term Resilience facilitates access to the expertise of leading academics who work on long-term global challenges, such as AI, biosecurity, and risk management policy. It helps convert cutting-edge research into actionable recommendations that are grounded in the UK context.

  • DeepMind is probably the largest research group developing general machine intelligence in the Western world. We’re only confident in recommending DeepMind roles focused specifically on safety, ethics, policy, and security issues. See current vacancies.

  • The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy, and social sciences to bear on big-picture questions about humanity and its prospects.

  • The Legal Priorities Project is an independent global research project founded by researchers from Harvard University. It conducts legal research that tackles the world’s most pressing problems, and is influenced by the principles of effective altruism and longtermism. See current vacancies.

  • OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe. It has received over $1 billion in funding commitments from the technology community. We’re only confident in recommending opportunities in their policy, safety, and security teams. See current vacancies.

  • United States Congress (for example, as a congressional staffer).

  • The United States Office of Science and Technology Policy works to maximise the benefits of science and technology to advance health, prosperity, security, environmental quality, and justice for all Americans.

  • The United States Office of the Secretary of Defense is where top civilian defence decision-makers work with the secretary to develop policy, make operational and fiscal plans, manage resources, and evaluate programmes. See current vacancies.
