The case for building expertise to work on US AI policy, and how to do it

We recently completed an in-depth article on US AI policy careers that should be of interest to many people on this forum. It begins:

At 80,000 Hours we think a significant number of people should build expertise to work on United States (US) policy relevant to the long-term effects of the development and use of artificial intelligence (AI).

In this article we go into more detail on this claim, as well as discussing arguments both for and against it. We also briefly outline which specific career paths to aim for and discuss which sorts of people we think might suit these roles best.

This article is based on multiple conversations with three senior US Government officials, three federal employees working on science and technology issues, three congressional staffers, and several other people who have served as advisors to government from within academia and non-profits. We also spoke with several research scientists at top AI labs and in academia, as well as relevant experts from foundations and nonprofits.

We have hired Niel Bowerman as our in-house specialist on AI policy careers. If you are a US citizen interested in pursuing a career in AI public policy, please let us know and Niel may be able to work with you to help you enter this career path.


  • The US Government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.

  • One of the main ways that AI might not yield substantial benefits to society is if there is a race to the bottom on AI safety. Governments are likely to be key actors that could contribute to an environment leading to such a race, or could actively prevent one.

  • Good scenarios seem more likely if there are more thoughtful people working in government who have expertise in AI development and are concerned about its effects on society over the long term.

  • This is a high-risk, high-reward career option, and there is a chance that pursuing this career path will result in little social impact over your career. However, we think there are scenarios in which this work is remarkably important, and so the overall value of work on AI policy seems high.

  • We think there is room for hundreds of people to build expertise and career capital in roles that may one day allow them to work on the most relevant areas of AI policy.

  • If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations, or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

Table of contents

  • The problem to solve

  • Key actors

  • What do we mean by US AI public policy careers?

  • Why pursue this path?

  • Arguments against

  • How to pursue a career in US AI public policy

  • Be careful to avoid doing harm

  • Fit: Who is especially well placed to pursue this career?

  • Questions for further investigation

  • Conclusion

  • Further reading

Continue reading the full article...
