AI Governance Career Paths for Europeans

  • This post is intended for citizens of European countries, since most recommendations in the EA community on AI governance have so far focused on the U.S.

  • I have written it in the course of thinking about my own career. I have been thinking about AI governance careers for about a year, attended a workshop on AI governance careers, and talked to people who work in the field. This post is an amalgam of my own thinking and conversations with others. Given its scope, a lot of the analysis is still fairly superficial; much more can be said about the different paths and options.

  • I publish this anonymously since there are downsides to being perceived, outside of this community, as very strategic about one's career choice.

Paths

The distinctions between these paths can be blurry at times, and it's possible to switch between them to some extent (see the section on flexibility). This is a categorization that makes sense to me. Hopefully, it's also helpful for you, but I don't claim that this is the only way to carve up this space. There might also be roles that don't fit neatly into this schema.

  • A note on China specialists: Having expertise on China can be very helpful for some of the career paths below (especially those focused on foreign policy & international security). Very few Europeans, however, will be well-positioned to pursue a policy career in China, so I do not explore it in detail. If you are in a good position to do so, you should seriously consider it.

  • A note on elected office/political careers: I do not consider them in this post because they will be the best option for only very few people and they are not primarily focused on AI governance. If seeking elected office is an excellent fit for you, it might very well still be your best option.

Research careers

What this path looks like:

I won't focus on this path a lot here since it's fairly widely discussed in EA. See Guide to working in AI policy and strategy (old) for some details on this path and this guide for academic careers more generally (old). For these roles, your nationality does not matter a lot. The research community is global, and connections and credentials transfer across countries.

Why this path matters:

We still don't know very much about the field of AI governance. There are many questions still to be answered. Making progress here is crucial, and arguably even required for policy roles to be impactful.

Industry governance careers

Policy teams in AI companies & industry bodies

What this path looks like:

The most impactful roles on this path are arguably on the policy teams of OpenAI, Partnership on AI, and DeepMind. Roles on the public affairs teams of big tech firms like Facebook, Amazon, Apple, Microsoft, IBM, and Google seem somewhat less impactful, though I don't hold a strong view on this. I would welcome more people testing these out.

These positions have become very competitive at this point. They often require significant technical understanding, combined with good communications abilities, and, ideally, policy experience (and/or a relevant academic background). Doing well in any of the paths outlined in this post for a few years will likely make you a good fit for such roles. It's also possible to apply for them directly, of course.

For these roles, your nationality technically does not matter, but immigration to the US might be very hard in some cases, unless you are worth the hassle to the company. The big tech firms, however, have offices in Europe.

Why this path matters:

Leading companies (early on) will likely be involved in determining the governance of AI technologies. (I haven't thought about this claim a lot, but it seems very plausible to me on the face of it.)

Determining the governance structures of leading companies might also be really important.

Global technological standards

What this path looks like:

I don’t know this path well at all.

It seems like there are relevant roles at (1) national standardization bodies as well as (2) international bodies like ISO, IEEE, IEC, ITU, and CEN/CENELEC. I don't have a good sense of how these organizations work internally, e.g., how much they rely on permanent staff compared to national members or experts who are recruited from companies for temporary/voluntary committee work.

I don't have a good sense of career progression in this field.

Why this path matters:

See Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.

Policy careers

US (national security) policy

It's possible for European nationals to attempt an AI governance career in the US. This path, however, is very uncertain and will involve many more obstacles than a comparable path in Europe. The biggest obstacle seems to be initial immigration (beyond university education), since you need to find an employer who is willing to sponsor you for an H-1B visa or a green card, a process that is very resource-intensive due to countless bureaucratic hurdles and legal fees. Marrying a US citizen allows you to apply for a green card immediately and also speeds up the wait time for naturalization (from five to three years).

To take full advantage of this path, you should aim to get naturalized (after ~6-10 years). This would allow you to take on relevant roles in the US government. Otherwise, you will be limited to roles outside the government. It is my understanding that it's very common for people to switch back and forth between roles outside and inside of government, so this restriction would be a significant disadvantage.

For many relevant government roles, you will also need to obtain a security clearance. You should consider whether you foresee any significant problems in this regard (e.g., contacts with nationals of, or travel to, countries like Russia, China, North Korea, or Iran; drug use; criminal behavior; general integrity/risk-seeking/discretion; promiscuity?; in general, things you can get blackmailed/pressured with; maintaining dual citizenship or having worked for another government; financial problems; emotional, mental, and personality disorders). From this perspective, this path seems particularly attractive for UK nationals (Five Eyes) and, to a lesser extent, Western European allies of the US.

I have not thought a lot about US AI policy careers focused on commercial regulation and/or science & tech policy. For what it's worth, my impression is that these things are, to a large extent, driven by national security considerations (at the moment).

What this path looks like:

See AI governance career in the US. Getting a STEM degree in the US probably makes subsequent immigration somewhat easier.

Why this path matters:

See AI governance career in the US.

European foreign policy & international security

What this path looks like:

Foreign policy & international security are determined primarily by national governments, not by the EU (staff), NATO (staff), or (the staff of) other supra-/international organizations.

Not all European countries (and national governments) are equally influential globally. For a comparative assessment, CINC is a good start, and USNews also has a power ranking. The countries that stand out to me are Germany, the UK, and France. Other countries also seem influential, but perhaps significantly less so: Italy, Spain, Switzerland (especially per capita), the Netherlands, Norway, Sweden.

I'd almost always recommend such careers only to nationals of these countries. So, for instance, I would generally not recommend that a Dane enter German foreign policy.

I'm not an expert on how to build a career in national foreign policy. It's noteworthy that in some European countries, civil service careers are very distinct from think tank and political careers (i.e., few people switch back and forth), and civil service careers are intended to be for life (this is the case in Germany, for instance). This implies much less flexibility and makes testing fit early on much more important. At the same time, think tanks appear to have significantly less influence than in the US (a weak impression). The best paths will also differ from country to country, so I recommend talking to experts in your country to learn more if possible. If you are an expert, consider writing up your advice.

Even if you're set on entering an international organization (like NATO), it still seems more robust to start one's career in the national policy arena (e.g., civil service, think tank). First, one can postpone, to some extent, the decision of which international organization to focus on while building relevant career capital. Second, international organizations mainly facilitate coordination among national governments. While staff at these organizations have some influence with regard to agenda-setting, planning, and foresight, decision-making is still in the hands of national actors (e.g., EU positions within the Common Foreign and Security Policy require unanimous approval by the member states; the North Atlantic Council consists of representatives of the member states; decisions on LAWS in the context of the Convention on Certain Conventional Weapons would be made by national governments; the Wassenaar Arrangement is between national governments). Third, to the extent that the staff of such organizations are important, transitioning from the national arena to the international one is often easier than vice versa.

  • EEAS: One can be seconded from the national civil service to the European External Action Service. This usually results in a faster ascent up the ladder. I'm not aware of similar secondments in the other direction. Top positions in the EEAS are filled from national governments/policy arenas rather than from among EU bureaucrats; the reverse does not hold for top national positions.

  • NATO (civilian bodies): National governments second staff to NATO (I'm not sure to which bodies exactly). Most top positions on the International Staff are picked or recruited from national governments/policy arenas rather than from among NATO bureaucrats. NB: I don't know how best to enter or advance in the Science & Technology Office.

  • UN: I do not know anything about the relevant UN bodies and how they recruit.

I'd expect NATO to be more important than the EU for arriving at a shared doctrine/position on the use of AI in the military (conditional on NATO still existing when the technology becomes more mature). NATO is the forum where European states coordinate with the US on defense-related matters (e.g., nuclear sharing). A European stance would likely matter little without such coordination, given the US advantage when it comes to AI and military matters. This would imply a focus on the transatlantic relationship for one's career. Expertise on China probably also helps.

Why this path matters:

I expect that European nations will influence international regimes & norms around the development and deployment of AI, especially in military contexts. These could have implications for TAI outcomes due to path dependencies. Some potential levers:

  • International regimes on LAWS or similar technology.

  • A joint Western/NATO stance or doctrine on the development and use of AI in the context of international security.

  • Useful global AI-related institutions (e.g., the OECD AI Observatory).

  • Joint international AI development efforts (e.g., a CERN equivalent).

  • Mediation between the US and China.

European nations will likely play less of a role than the US or China in any such efforts. Their contribution might still be important.

EU commercial regulation

What this path looks like:

AI regulatory policy will be decided at the EU level rather than the national level (see Commission plans here). It seems to me that working directly in the EU ecosystem is the best path for this, but a start in the national policy sphere might work equally well (especially for the most important EU countries like Germany and France). Personal considerations (e.g., network) might well be decisive. See AI policy careers in the EU for far more details.

Why this path matters:

UK science & tech (& AI) policy

It is possible for (non-UK) European nationals to pursue careers in the UK civil service and UK policy more generally (e.g., think tanks). Some posts in the UK civil service are reserved for UK nationals (the security and intelligence services, the Diplomatic Service, the Foreign and Commonwealth Office, and some other posts if a special allegiance to the UK is deemed to be required). Naturalization in the UK seems to be possible after ~6 years. (However, I'd probably recommend that most people try to get naturalized in the US if they're willing to commit to naturalization in the first place.)

I don't consider other European countries' science & tech policy because they just don't seem well-positioned to make a difference: they don't incubate cutting-edge AI labs one could influence, and, regulation-wise, they're dominated by the EU. The UK is not in the EU and has the strongest AI ecosystem in Europe (incl. DeepMind).

What this path looks like:

Why this path matters:

One can potentially influence cutting-edge AI labs to some degree. UK regulation might also be influential globally, though I expect it to be less influential than that of the US or the EU.

Support & field-building careers

What this path looks like:

This is not so much a clear path as a collection of (1) support roles in AI governance organizations and (2) AI governance roles in effective altruist organizations. Category (1) includes (research/project) management roles and operations roles. Category (2) is very idiosyncratic. Examples include some research analyst or grantmaker roles at the Open Philanthropy Project, project management roles at GovAI, and some roles at 80,000 Hours. Since there is no clear career progression in this path, you will have to make it up as you go along. For these roles, your nationality does not matter.

Why this path matters:

You can leverage the impact of others in the community by bringing in more people or making people more effective.

How to decide between different paths

Broadly speaking, three factors matter: (1) how impactful you expect the average career in a particular path to be (independent of any personal considerations); (2) what your personal fit and comparative advantage for a particular path is; and (3) what your starting point for a particular path would be. Flexibility considerations might also affect your choice (see below).

Expected impact of the average career

Judgments about this will probably depend on a wide range of background beliefs, so different people will probably have different views. Relevant factors: (1) the potential impact of different roles along the career path; (2) the tractability of career progression; and (3) the neglectedness of the path. I don't feel comfortable making a lot of strong claims here. Below I sketch the ones I do feel somewhat confident in:

Field-building is probably the most impactful thing to do right now. The field is still really small and young at this point, and bringing in more of the right people is crucial. There are some roles that are directly focused on building up the field or specific organizations in it. Founding the right organizations is among the best things to be doing in this regard: through his research and work, Allan Dafoe was able to set up GovAI, which has been crucial for building the field; through his work, Jason Matheny was able to set up CSET, which has been crucial for building the field of AI policy in the U.S. Field-building, however, need not be done only "directly." All roles and careers have field-building effects: publishing research, convincing other policy-makers of the importance of longtermism, etc. My impression is that, generally speaking, research has bigger field-building effects than policy work (with some exceptions).

Careers in US (national security) policy (currently) seem to be more impactful than careers in European foreign policy & international security or careers in UK science & tech (& AI) policy.

  • Roles in this path seem much more impactful since the US has significantly more global influence than any European country and is home to more advanced AI capabilities than any European country. It's probably also true that wielding this influence will be somewhat harder, since the policy area is more contested in the US than in Europe.

  • The EA community also seems to be better positioned to have an influence there, since it seems to be easier to advance in one's career and there are more roles available in government and think tanks.

  • Neglectedness considerations push in favor of European countries, but currently the balance still seems to me to be in favor of the US. If, in a few years, there are hundreds of EAs trying to affect US policy in this area but still only very few in Europe, the balance might flip. Other than that, I don't think there are many things I could plausibly learn that would change my mind.

  • I'm not sure how I think about the comparison for somebody who could plausibly get into a position that allowed them to build the field in such a more neglected jurisdiction (probably not the UK) on a large scale.

More weakly: careers in US (national security) policy (currently) seem to be more impactful than careers in EU commercial regulation of AI.

  • The EU's influence on leading AI developers is not clear at the moment. I expect it to be less overall than that of the US government; it'd be surprising if a foreign government had more influence. In the EU, the policy window for AI regulation is rapidly closing, since the EU will soon pass some legislation. There will still be decisions to be made afterwards, but they will probably be less decisive. The next major piece of legislation will probably only be passed after 10 to 20 years (GDPR was passed ~20 years after the previous major privacy directive). I expect AI to remain on the national security agenda of the US for a long time.

  • National security seems more hierarchical, contested, and top-down than regulatory affairs, so reaching relevant roles in the EU is arguably easier. However, there are probably also far fewer relevant roles available.

  • Neglectedness considerations push in favor of the EU, but currently the balance still seems to me to be in favor of the US. If, in a few years, there are hundreds of EAs trying to affect US policy in this area but still only very few in Europe, the balance might flip.

  • I'm not sure how I think about the comparison for somebody who could plausibly get into a position that allowed them to build the field in the EU on a scale similar to other efforts in the community.

Personal fit and comparative advantage

80,000 Hours defines personal fit roughly as "your chances of excelling in the job." You can read more about comparative advantage in this context here.

Personal fit for the specific paths:

  • Research careers: 80,000 Hours has some thoughts on it here and here (old).

  • Industry governance careers: My understanding is that personal fit will be similar to policy careers (see below), with a stronger focus on technical skills, background, and interest. You will also tend to work in tech companies (or adjacent organizations) as opposed to large government bureaucracies (or adjacent organizations), which will also influence your personal fit.

  • Policy careers: 80,000 Hours has some thoughts on it here (US-specific), here (US Congress-specific), here (old), and here (old). Specific things might be relevant for different jurisdictions, but I'm not an expert.

  • Support & field-building careers: Some insights from 80,000 Hours career profiles are probably somewhat transferable: effective altruist organizations (old), effective nonprofits (old), operations management, and foundation grantmaker (old).

Starting point

Your starting point on a particular path is a function of (1) your existing career capital for that particular path, and (2) the career capital requirements for that particular path. You can learn more about career capital from 80,000 Hours.

Career capital "requirements" for specific paths:

  • Research careers: 80,000 Hours has some insights on academic research careers here. Getting a PhD (or equivalent) in a relevant field is probably good. Credentials play less of a role for some EA(-adjacent) research organizations (e.g., FHI). Re-entering academia after extensive time outside of it is apparently often difficult. "Hacking the system" to jump rungs seems hard, since gatekeeping is institutionalized.

  • Industry governance careers: My impression is that one can enter this path from the technical or the policy side. I'd expect connections to matter a lot for the most important organizations (like DeepMind, OpenAI, PAI) since they're still quite young, informal, and growing. Credentials will become increasingly important, though.

  • Policy careers: See here for US policy careers, here for the UK civil service, and here for think tanks (old). Credentials seem more important in continental Europe compared to the UK and the US: think tanks in continental Europe seem to require at least a graduate degree, and often a PhD. Even civil service careers in some cases require a (relevant) graduate degree (at least in Germany). Connections matter comparatively less for civil service careers, since admissions are very standardized.

  • Support & field-building careers: Skills and connections matter most here, and are heavily dependent on the role in question. Credentials are comparatively less important.

Flexibility considerations

  • It's generally easier to transition from research to any of the other paths than vice versa.

  • I'd expect transitions between industry governance and policy to be easier in the US (and maybe the UK) than in continental Europe, where this revolving door tends to be viewed with more suspicion.

  • A lot of policy career capital is localized and does not transfer well to other countries/jurisdictions. For instance, a strong network and decent policy expertise in Berlin will help you significantly less in landing a new job in Washington, D.C. than in landing a new one in Berlin.

  • The career capital of support & field-building roles will depend a lot on the specific role: some will offer little and others a lot of career capital for the other paths. I'd expect any experience in the other paths to be helpful for entering this path.