Aptitudes for AI governance work

I outline seven “aptitudes” for AI governance work. For each, I give examples of existing work that draws on the aptitude, and a more detailed breakdown of the skills I think are useful for excelling at it.

How this might be helpful:

  • For orienting to the kinds of work you might be best suited to

  • For thinking through your skill gaps for those kinds of work

  • For offering an abstraction that might help those thinking about field-building/talent pipeline strategy

Epistemic status:

  • I’ve spent ~3 years doing full-time AI governance work. Of that, I spent ~6 months FTE working on questions related to the AI governance talent pipeline, with GovAI.

  • My work has mostly been fairly foundational research—so my views about aptitudes for research-y work (i.e. the first four aptitudes in this post) are more confident than for more applied or practical work (i.e. the latter three aptitudes in this post).

  • I’ve spent ~5 hours talking with people hiring in AI governance about the talent needs they have. See this post for a write-up of that work. I’ve spent many more hours talking with AI governance researchers about their work (not focused specifically on talent needs).

  • This post should be read as just one framework that might help you orient to AI governance work, rather than as making strong claims about which skills are most useful.

Some AI governance-relevant aptitudes

Macrostrategy

What this is: investigating foundational topics that bear on more applied or concrete AI governance questions. Some key characteristics of this kind of work include:

  • The questions are often not neatly scoped, such that generating or clarifying questions is part of the work.

  • It involves balancing an unusually wide or open-ended range of considerations.

  • The reasoning involves a high level of abstraction.[1]

  • The methodology is often not very clear, such that you can’t just plug-and-play with some standard methodology from a particular field.

Examples:

  • Descriptive work on estimating certain ‘key variables’

  • Prescriptive work on what ‘intermediate goals’[2] to aim for

    • E.g. analysis of the impact of the US government’s 2022 export controls.

  • Conceptual work on developing frameworks, taxonomies, models, etc. that could be useful for structuring future analysis

Useful skills:

  • Generating, structuring, and weighing considerations. Being able to generate lots of different considerations for a given question and to weigh them up appropriately.

    • For example, there are a lot of considerations that bear on the question “Would it reduce AI risk if the US government enacted antitrust regulation that prevents big tech companies from buying AI startups?”

      • Some examples of considerations are: “How much could this accelerate or slow down AI progress?”, “How much could this increase or decrease Western AI leadership relative to China?”, “How much harder or easier would this make it for the US government to enact safety-focused regulations?”, “How would this affect the likelihood that a given company (e.g., Alphabet) plays a leading role in transformative AI development?”, etc.

      • Each of these considerations is also linked to various other considerations. For instance, the consideration about the pace of AI progress links to the higher-level consideration “How does the pace of AI progress affect the level of AI risk?” and the lower-level consideration “How does market structure affect the pace of AI progress?” That lower-level consideration can then be linked to even lower levels, like “What are the respective roles of compute-scaling and new ideas in driving AI progress?” and “Would spreading researchers out across a larger number of startups increase the rate at which new ideas are generated?”

    • Being able to (at least implicitly) build up these kinds of structures of considerations is an important skill, particularly for reasoning about the all-things-considered effect of a proposed policy. (For a toy sketch of such a structure, see the code example after this skills list.)

    • It’s also really valuable to develop good intuitions about the relative importance of different considerations and the tractability of gaining clarity about them: to avoid getting bogged down, you need to decide what you will actively dig into and what you will mostly assume or ignore. (This skill relates closely to the skill of using abstraction well.)

  • Using abstraction well. Abstraction—ignoring details to simplify the topic you’re thinking about—is an essential tool for reasoning, especially about macro issues. It saves you cognitive effort, allows you to reason about a larger set of similar cases at the same time, and prompts you to think more crisply. However, details will often matter a lot in practice, and people can underestimate how much predictive power they lose by abstracting.

  • Knowledge of AI (e.g. roughly how modern AI systems work) and of AI threat models is important for much of this work.

  • Sometimes, the “vices” of laziness, impatience, hubris, and self-preservation are important. See this post for more on that.
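
To make the idea of a “structure of considerations” more concrete, here is a minimal, purely illustrative sketch in Python that represents considerations as a tree of weighted nodes. The questions are drawn from the antitrust example above, but the weights and the representation itself are hypothetical, not a claim about how this reasoning is actually done.

```python
from dataclasses import dataclass, field

@dataclass
class Consideration:
    """One node in a tree of considerations bearing on a policy question."""
    question: str
    weight: float = 1.0  # hypothetical importance weight, for illustration only
    children: list["Consideration"] = field(default_factory=list)

# A fragment of the antitrust example from the text, with made-up weights.
tree = Consideration(
    "Would it reduce AI risk if the US government prevented big tech "
    "companies from buying AI startups?",
    children=[
        Consideration(
            "How much could this accelerate or slow down AI progress?",
            weight=0.5,
            children=[
                Consideration("How does the pace of AI progress affect the level of AI risk?"),
                Consideration("How does market structure affect the pace of AI progress?"),
            ],
        ),
        Consideration(
            "How much harder or easier would this make safety-focused regulation?",
            weight=0.3,
        ),
    ],
)

def show(node: Consideration, depth: int = 0) -> None:
    """Print each consideration, indenting sub-considerations under their parents."""
    print("  " * depth + f"[weight {node.weight:.1f}] {node.question}")
    for child in node.children:
        show(child, depth + 1)

show(tree)
```

The only point of the sketch is that considerations form a hierarchy you can traverse: deciding which branches to expand and which to prune is exactly the prioritisation skill described above.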

Interlude: skills that are useful across many aptitudes

Under each aptitude, this post lays out skills that seem useful for excelling at it. But there are some skills that are useful across many aptitudes. To avoid repetition, I’m going to list those here.

Useful skills for all the aptitudes

  • Impact focus. The motivation to have an impact through your research, and the ability to reason about what it takes to produce this impact. Being scope-sensitive in the way you think about impact.

  • Productivity. All else equal, doing things more quickly is better.

Useful skills for research aptitudes

  • Good epistemics, in particular:

    • Scout mindset. The motivation to see things as they are, not as you wish they were; to clearly and self-critically evaluate the strongest arguments on both sides. See Julia Galef’s book The Scout Mindset for more.

    • Reasoning transparency. Communicating in a way that prioritises the sharing of information about underlying general thinking processes. See Open Philanthropy’s “Reasoning Transparency” for more.

    • Appropriately weighing evidence. Having an accurate sense of how much information different types of evidence—e.g., regression analyses, expert opinions, game theory models, historical trends, and common sense—provide is crucial for reaching an overall opinion on a question. In general, researchers should be wary of over-valuing a particular form of evidence, e.g., deferring too much to experts or making strong claims based on a single game theory model or empirical study.

  • Comfort with quantitative analysis. Even if you don’t often use quantitative research methods yourself, you will probably need to read and understand quantitative analyses a non-trivial amount of the time. So, although a STEM background is definitely not necessary, it is useful to be comfortable with topics like probability, statistics, and expected value. (A toy expected-value calculation follows this list.)

  • Writing. See this post for some thoughts on why and how to improve at writing.

  • Ability to get up to speed in an area quickly.

  • Rigour and attention to detail.
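
As a small illustration of the comfort with probability and expected value mentioned above, here is a toy calculation in Python. The two interventions, their outcome probabilities, and the impact figures are all invented for the example; nothing here describes real interventions.

```python
# Toy expected-value comparison between two hypothetical interventions.
# All probabilities and impact figures are invented for illustration only.

interventions = {
    "Intervention A": [(0.10, 100.0), (0.90, 0.0)],  # (probability, impact) pairs
    "Intervention B": [(0.60, 10.0), (0.40, 2.0)],
}

for name, outcomes in interventions.items():
    # Sanity check: each outcome distribution's probabilities should sum to 1.
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    ev = sum(p * impact for p, impact in outcomes)
    print(f"{name}: expected value = {ev:.1f}")

# Prints:
#   Intervention A: expected value = 10.0
#   Intervention B: expected value = 6.8
# The scope-sensitive point: a low-probability, high-impact option (A) can
# beat a likelier but smaller win (B).
```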

Policy development

What this is: taking “intermediate goals” (e.g. “improve coordination between frontier AI labs”) and developing concrete[3] proposals for achieving them. Some key characteristics of this kind of work include:

  • Often has similar features to macrostrategy work, especially when it requires navigating a relatively open design space and understanding and weighing the many second-order effects a policy might have.

  • Requires familiarity with relevant institutions.

Examples:

  • Proposals for licensing regimes requiring AI labs to seek approval from some third-party body before training certain kinds of AI systems.

  • Proposals for international AI safety agreements.

Useful skills:

  • Familiarity with relevant institutions (e.g. governments, AI labs)

    • E.g. how policymaking works in the institution; knowing the difference between the on-paper and in-practice versions of that; knowing how to ask questions that elucidate the difference; understanding the current political climate in the institution.

    • Actually having experience in/​adjacent to the institution is very helpful, though not strictly necessary.

  • Additionally, policy development work can involve more or less of the macrostrategy aptitude.

    • When developing policies to achieve some pre-determined, well-scoped objective, the macrostrategy aptitude is less necessary.

    • However, to the extent that figuring out or clarifying the policy objective is part of the work, the skills listed under the macrostrategy aptitude are important.

  • Plus: the skills that are useful across many aptitudes.

Well-scoped research

What this is: answering well-scoped questions[4] that are useful for AI governance.

Useful skills:

  • Domain knowledge/​subject expertise. Although being familiar with a range of areas can be helpful, it is often very valuable to know a lot about one or two particular topics—ones that are especially important and where few other experts exist.

    • Some examples of relevant subjects: AI hardware, information security, the Chinese AI industry, …

  • Plus: the skills that are useful across many aptitudes.

Distillation

What this is: clarifying ideas, working out how best to present them, and writing them up (rather than coming up with new ideas).

Useful skills:

  • Using abstraction well

  • Plus: the skills that are useful across many aptitudes (especially writing).

Public comms

What this is: communicating about AI issues (to e.g. ML researchers, AGI labs, policymakers, the public) to foster an epistemic environment that favours good outcomes (e.g. one in which more people believe AI could be really dangerous).

Political and bureaucratic aptitudes

What this is: advancing into a high-leverage role within (or adjacent to) a key government, AI lab, or other institution, from which you can help it make decisions that lead to good outcomes from advanced AI.

Examples:

  • Individuals working in, or advising, key governments and AI labs

Useful skills:

  • A good understanding of the relevant messy human institution(s)

  • Strong social skills, emotional intelligence, and verbal communication

  • Professionalism

  • Certain kinds of credentials are useful to different degrees in different environments, e.g.

    • Having a PhD grants more on-paper seniority within certain relevant AI labs

    • Having a Master’s degree makes it easier to be hired by certain relevant think-tanks

  • Project management is often useful, since this kind of work typically involves coordinating people to get things done

  • Plus: the skills that are useful across many aptitudes.

Management and mentorship

What this is: directing and coordinating people to do useful work, and enabling them to become excellent.[5]

Useful skills:

  • People management, e.g.

    • Providing support and coaching to direct reports/​mentees

    • Stakeholder management

  • Project management

  • Hiring

  • Being good at the object-level work you’re managing/​mentoring people in

  • Plus: the skills that are useful across many aptitudes.

Caveats

  • Whilst I’ve tried to cover a lot of ground here, I expect to have missed some things. In particular, I don’t expect the lists of useful skills to be complete.

  • Something that’s kind of missing is a “leadership” aptitude. I often think of “leadership” as a composite aptitude, composed of management and strategy.

  • A grantmaking aptitude, and something like a “field-building” aptitude, are also missing.

  • Many people doing valuable work are strong in several aptitudes; I am not trying to give the impression that you should necessarily try to specialise in a given aptitude.

Thanks to Charlotte Siegmann and Rose Hadshar for comments and conversations, and to Ben Garfinkel for guidance and feedback.

  1. ^

    Note that it’s normally important to be able to move back and forth between different levels of abstraction. Otherwise, a pitfall of this kind of work is either getting lost in Abstraction Land, or not being sufficiently attentive to relevant empirical facts.

  2. ^

    By ‘intermediate goal’, I mean a goal for improving the lasting impacts of AI that’s more specific and directly actionable than a high-level goal like ‘reduce risk of power-seeking AI’ but is less specific and directly actionable than a particular intervention. E.g. something like ‘improve coordination between frontier AI labs’.

  3. ^

    Eventually, proposals need to be very concrete, e.g. “[this office] should use [this authority] to put in place [this regulation] which will have [these technical details]. And they’re not going to want to do it for [these reasons]. [This set of facts] will be adequate to convince them to do it anyway.” Normally there will be intermediate work that isn’t as concrete as this.

  4. ^

    By ‘well-scoped questions’, I mean ones which don’t require further clarification and have a fairly clear methodology which can be applied to answer them.

  5. ^

    Management and mentorship seem like somewhat different skillsets to me—in particular, it seems possible to be excellent at mentorship but not at other aspects of management—but they blur into each other enough that I’ve grouped them.
