SHOW: A framework for shaping your talent for direct work

By Ryan Carey, cowritten with Tegan McCaslin (this post represents our own opinions, and not those of our current or former employers)

TLDR: If your career as an EA has stalled, you’ll eventually break through if you do one (or more) of four things: gain skills outside the EA community, assist the work of more senior EAs, find valuable projects that EAs aren’t willing to do, or find projects that no one is doing yet.

Let’s say you’ve applied to, and been rejected from, several jobs in high-impact areas over the past year (a situation that is becoming more common as the movement grows). At first you thought you were just unlucky, but it looks increasingly likely that your current skillset just isn’t competitive for the positions you’re applying to. So what’s your next move?

I propose that there are four good paths open to you now:

  1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.

  2. Get Humble: Amplify others’ impact from a more junior role.

  3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.

  4. Get Weird: Find things no one is doing.

I’ve used or strongly considered all of these strategies myself, so before I outline each in more depth I’ll discuss the role they’ve played in my career. (And I encourage readers who resonate with SHOW to do the same in the comments!) Currently I do AI safety research for FHI. But when I first came to the EA community 5 years ago, my training was as a doctor, not as a researcher. So when I had my first professional EA experience, as an intern for 80,000 Hours, my work was far from extraordinary. As the situation stood, I was told that I would probably be more useful as a funder than as a researcher.

I figured that in the longer term, my greatest chance at having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize that. I got skilled by pursuing a master’s degree in bioinformatics, thinking I might contribute to work on genomics or brain emulations.

But when I graduated, I realized I still wouldn’t be able to lead research on these topics; I didn’t yet have substantial experience with the research process. So I got humble and reached out to MIRI to see if they could use a research assistant. There, I worked under Jessica Taylor for a year, until the project I was involved in wound down. After that I reached out to several places to continue doing AI safety work, and was accepted as an intern and ultimately a full-time researcher at FHI.

Right now, I feel like I have plenty of good AI safety projects to work on. But will the ideas keep flowing? If not, that’s totally fine: I can get outside and work on security and policy questions that EA hasn’t yet devoted much time to, or I could dive into weird problems like brain emulation or human enhancement that few people anywhere are working on.

The fact is that EA is made up in large part of talented generalists trying to squeeze into tiny fields, with very little supervision to go around. For most people, trying to do direct work will mean repeatedly hitting career walls like I did, and there’s no shame in that. If anything, the personal risk you incur through this process is honorable and commendable. Hopefully, the SHOW framework will help you go about hitting walls a little more efficaciously.

1. Get Skilled (outside of EA)

This is common advice for a reason: it’s probably the safest and most accessible path for the median EA. When you consider that skills are generally learned more easily with supervision, and that most skills are transferable between EA and non-EA contexts, getting training in the form of a graduate degree or a relevant job is an excellent choice. This is especially true if you broadly know what kind of work you want to do but don’t have a very specific vision of the particulars. Even if you already have skills which seem sufficient to you, they might not be well suited to the roles you’re interested in, in which case retraining is probably in order.

However, you should make sure that whatever training scheme you have in mind will actually prepare you for what you want to do, or will otherwise give you good option value. Some things that sound relevant won’t be, and some that don’t sound relevant will be. Making sure your goals are clear and sensible is an important step to making this strategy work.

Although in theory it could cut down on your option value, you might also want to err on the side of getting specialized skills. This takes you out of the very large reference class of “talented EA generalist”, where you have to be really extraordinary to be noticed, and into a less competitive pool where you have a better shot at making a substantial contribution.

80k has reviewed jobs in numerous high-impact career paths in detail, and makes specific recommendations about training in each review—check the guide out if you haven’t already.

2. Get Humble (by helping more senior EAs)

If you can find someone who shares your goals and intellectual interests, and who actually thinks you could be of use to them, that opportunity is golden. For researchers, this probably means being a research assistant, while for operations it might mean volunteering to help an organizer run events. In some cases, offering to work as a PA for a highly productive EA could be an enormous help, since finding competent PAs on one’s own is quite difficult.

Although these types of opportunities are a great choice both for skill-building and direct impact, they are undervalued for a few reasons. Firstly, people often underestimate the share of contributions that top performers in a field are responsible for. Acting as a “force multiplier” for one of these top people will often be higher impact than contributing your own independent work. You might also have to take a hit to your ego to subordinate your work to someone else’s success, especially if they are relatively junior. (It’s worth noting, though, that other people will usually be quite impressed to learn that you’ve worked under another successful EA.)

When I worked under Jessica Taylor at MIRI, we were both quite young and technically had the same level of credentials, and she lacked management experience. But while there, in addition to assisting high-impact work, I got to learn a huge amount about writing papers, developing good research intuitions and learning relevant mathematics. I improved much faster over that period than when I was studying in an ML lab, which in turn was much faster than when I was taking classes or self-studying. Organizations like MIRI and FHI receive hundreds of applications per year for researcher roles, whereas the number of people per year who ask to join as research assistants is something like thirty times lower. Given the opportunities for skill development, freedom and publication that research assistants have, I think EA researchers are probably making a big mistake by so rarely asking for these sorts of positions.

There are quite a few caveats to this strategy. First of all, the capacity of this path to absorb people is limited by the number of experienced people willing to take on assistants, so it won’t be a good fit for as many people as Get Skilled might be. There are many reasons a person might decline an offer of assistance. For one, it implicitly requests some degree of management, which not everyone will have the bandwidth for. Even if the person you approach does have the bandwidth to manage an assistant, they may need to see a lot of evidence to be convinced that you would add value. And in any case, they may not be in a position to offer you compensation.

Nonetheless, this path seems underexploited, and should be a fantastic stepping-stone for those who are able to pull it off.

3. Get Outside (of the conventional EA approaches)

Just like any social community, EA has incentives that aren’t perfectly aligned with optimal resource allocation. If you understand those incentives well, you can identify the areas most likely to be neglected by EAs. And if it’s important to have an EA perspective represented in an area, you might want to pursue it even if a lot of non-EA work is being done there already.

An area might be neglected by EAs because it’s devalued by EA culture or politics. EAs are mostly wealthy, urban and center-left, and there may be causes which would be apparent to individuals from other backgrounds, but are completely off the radar of mainstream EAs. Some paths are avoided largely because they offend EA cultural sensibilities, not because they lack impactful opportunities. For example, since EA and rationality lean toward startups, non-hierarchical structures and counterculturalism, few EAs engage in security and defense. Some EAs who’ve bucked this trend have found quite a bit of success, like Jason Matheny, who served as the director of IARPA. As this example shows, the highest-impact careers are often not in EA organizations at all. If you can succeed in one of these neglected career paths, your impact could ultimately far outshine the impact you could have had by working at a “traditional” EA org.

Sometimes, an activity is collectively beneficial but individually costly. If you write an article that includes criticism of community organizations and institutions, this may be an extremely valuable service, but it nonetheless carries some risk of social punishment. Examples of great articles reviewing institutions include the AI Alignment Literature Review and the Review of Basic Leverage Research Facts.

4. Get Weird (by finding your own bizarre niche)

Right now, professional EA is slow-growing in terms of depth: because management capacity is bottlenecked, it’s often difficult to get value from adding marginal generalist hires to a project. But there are no such limits on breadth, and if you can find something to do that fewer than 10 people in the world are currently doing, you can chip away at the nearly infinite list of “things someone should do but no one’s seriously considered yet”.

Ten years ago, AI safety was on that list. The few people who were thinking about it in the early days are often now heading organizations or pioneering ambitious research programs. It’s definitely not the case that all causes on that list will grow to the magnitude that AI safety has, but some will turn out to be important, and many will be valuable in a second-order way.

Few people are working on impact certificates, voting method reform, whole brain emulation, alternative foods, atomically precise manufacturing, or global catastrophic biorisks. None of those are slam-dunk causes. But there’s a lot to be said for the value of information, and many suboptimal causes will be adjacent to genuinely promising ones. If you have a specialized background or interests that position you well to pursue the unusual (for instance, if you have two distinct areas of expertise that aren’t often combined), this strategy is made for you.

Of the four strategies, getting weird is probably the riskiest, and the one fewest people are suited for. Projects chosen at random from the list are overwhelmingly likely to be of no value whatsoever, so you’d have to rely on your (probably untested) ability to choose well in the face of little evidence. Worse, there are major “unilateralist’s curse” concerns for projects that seem promising but haven’t been pursued. These dangers aren’t so great that this strategy can’t be recommended to anyone, and it’s probably worth most people’s time to come up with a short list of speculative projects they’d be suited to working on. But readers should be advised to proceed with caution and seek feedback on any harebrained schemes.

Putting it all together

The four strategies above aren’t mutually exclusive, and in fact combining them where you can (and where it makes sense) may yield better results than using any one strategy on its own. I think with enough work, SHOWing can eventually pay off for most people, but it may take a while to get there. I gave a clean little story about my own career trajectory above, but be assured that my path was also littered with false starts, rejections and failures.

If I were wiping the slate clean and starting my career over now, I might go through each of the four strategies and enumerate all the possible opportunities open to me on each path. I could then rank these opportunities by the probability that I’d succeed in pursuing them, the amount success would move me closer to my ultimate career goals, and the amount of direct impact a success would represent. I’d also want to consider what kind of competition I would have for each opportunity. Basically, SHOW can help with the initial step of generating a moderately-sized list of strong options, although the impact potential of these options still needs to be analyzed.
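The ranking step above can be sketched as a simple expected-value calculation. To be clear, the opportunity names and all the numbers below are invented for illustration, not drawn from any real assessment; the point is only the shape of the computation:

```python
# Hypothetical sketch of ranking career opportunities by expected value.
# Every name and number here is made up for illustration.

def score(p_success, career_progress, direct_impact):
    """Expected value: probability of success times the value of succeeding,
    where that value combines progress toward career goals and direct impact."""
    return p_success * (career_progress + direct_impact)

# (name, P(success), career progress if successful, direct impact if successful)
opportunities = [
    ("Get Skilled: relevant master's degree", 0.8, 5, 1),
    ("Get Humble: research assistantship",    0.3, 8, 4),
    ("Get Outside: security/policy role",     0.2, 6, 6),
    ("Get Weird: speculative new project",    0.1, 4, 9),
]

# Sort highest expected value first.
ranked = sorted(opportunities, key=lambda o: score(*o[1:]), reverse=True)
for name, p, prog, imp in ranked:
    print(f"{score(p, prog, imp):4.1f}  {name}")
```

A fuller version would also discount each score by how crowded the opportunity is, reflecting the competition consideration mentioned above.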

The final thing I want to share is the example of my current favorite musician, Brad Mehldau. At 48, he’s considered one of the top improvisational pianists. But he took a long road to the top. As a child he developed skills in classical piano, an ideal way to practice left-hand technique and multiple harmonies. He moved to New York to study jazz, and got humble, touring as a sideman for a saxophonist for 18 months. His first two albums, recorded at ages 24-25, consisted mostly of jazz standards, and were criticised as sounding too much like a certain legendary pianist in the jazz tradition. But with experience, he grew a more distinctive voice. Nowadays he plays a unique style of jazz that steps outside of jazz’s usual confines to incorporate pop covers and elements of classical music. He has one delightfully weird album where each song copies one of Bach’s. Many people like him manage to make good career decisions without doing expected value calculations at each step, instead choosing to learn important skills, surround themselves with brilliant people, and eventually find a niche where they can fulfill their potential. When our best laid plans fail, we can do worse than falling back on these heuristics.

Thanks to Howie Lempel for feedback on this post, though mistakes are ours alone.