(Post 1/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some key uncertainties in AI governance field-building
According to me, these are some of the key uncertainties in AI governance field-building—questions which, if we had better answers to them, might significantly influence decisions about how field-building should be done.
How best to find/upskill more people to do policy development work?
I think there are three main skillsets involved in policy development work:
Macrostrategy
“Traditional” policy development work (e.g. a detailed understanding of how policymaking works within a given institution, in order to devise feasible policy actions)
Impact focus (i.e. working to improve lasting impacts of AI in a scope-sensitive way)
A more concrete question is: which of these skillsets to prioritise in talent search/selection versus upskilling. E.g. do you take people already skilled in X and Y and train them in Z, or people skilled in X and Z and train them in Y, and so on?
What are the most important profiles that aren’t currently being hired for, but nonetheless might matter?
Reasons why this seems important to get clarity on:
Focus on neglected aspects of the talent pipeline. People want to get hired, so they will be trying to skill up for positions that are currently being hired for. For future positions, by contrast—especially positions that will never be “hired for” per se (e.g. leading a policy team that wouldn’t exist unless you pitched it), and “positions with a deadline”[1]—the standard career incentives to skill up aren’t as strong. Also, some people currently hiring are already trying to solve their own bottlenecks (e.g. designing efficient hiring processes to identify talent), whereas future hirers aren’t yet in a position to do so.[2]
Avoid myopically optimising the talent pipeline. The world will probably change a lot in the run up to advanced AI. This will affect the value of different talent pipeline interventions in three ways:
There will likely be more people interested in AI (governance). So there will be more people wanting to contribute, and hence more value in work that can usefully leverage a large amount of labour.
The people who become interested may have different skills and inclinations, compared to the current talent pool. This will change the future comparative advantage of people we can currently find/upskill.
More concretely, you might think that people currently working on AI governance are disproportionately inclined towards macro/strategy, relative to the talent pool in, say, 5 years’ time. Optimising for ticking all the talent boxes by the end of the year might look like finding/upskilling people with deep knowledge in certain areas we’re currently lacking (e.g. more lawyers). But if you instead think such people will be drawn to the field once there are important questions they can answer, and that the community will be able to usefully leverage their knowledge, this could suggest doubling down on building a community that’s excellent at strategy. [I’m very uncertain about this particular line of reasoning, but think there might be some important thread to pull on here, more generally.]
The nature of important work will change as we move into the AI endgame: e.g. probably less field-building, more public comms, more founding of new institutions, more policy development, etc.
To what extent should talent pipeline efforts treat AI governance as a (pre-)paradigmatic field?
Some more concrete versions of this question, all trying to get at the same thing:
On the current margin (now and in the future), how much of the most valuable work is crank-turn-y? By “crank-turn-y” I mean “work which can be delegated to sensible people even if they aren’t deeply integrated into the existing field”.
On the current margin (now and in the future), how high a premium should talent search/development efforts put on macro/strategy aptitude?
On the current margin (now and in the future), how much of the most valuable work looks like contributing to an intellectual project that has been laid out (rather than doing the initial charting out of that intellectual project)?
Answers to these questions seem like they should affect how quickly the field scales up, and how hard we try to attract people who are excellent at crank-turn-y work versus strategy work. I lightly hold the intuition that getting this wrong is one of the main ways this field could mess up.
[1] Re: “positions with a deadline”: it seems plausible to me that there will be windows of opportunity when important positions come up, and if you haven’t built the traits you need by that time, it’s too late. E.g. more talent highly skilled at public comms would probably have been pretty useful in Q1–Q2 2023.
[2] Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t building up their own skills efficiently, then there could still be significant gains from helping them to do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.