Some promising career ideas beyond 80,000 Hours’ priority paths

This is a sister post to “Problem areas beyond 80,000 Hours’ current priorities”.

Introduction

In this post, we list some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future.

Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven’t written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective.

Others, like information security, we think might be as promising for many people as our priority paths, but because we haven’t investigated them much we’re still unsure.

Still others seem like they’ll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management.

Finally, some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can’t recommend them widely because they don’t have the capacity to absorb a large number of people, are particularly risky, or both.

We compiled this list by asking 6 advisers about paths they think more people in the effective altruism community should explore, and which career ideas they think are currently undervalued—including by 80,000 Hours. In particular, we were looking for paths that seem like they may be promising from the perspective of positively shaping the long-term future, but which aren’t already captured by aspects of our priority paths. If something was suggested twice and also met those criteria, we took that as a presumption in favor of including it. We then spent a little time looking into each one and put together a few thoughts and resources for those that seemed most promising. The result is the list below.

We’d be excited to see more of our readers explore these options, and plan on looking into them more ourselves.

Who is best suited to pursue these paths? Of course the answer is different for each one, but in general pursuing a career where less research has been done on how to have a large impact within it—especially if few of your colleagues will share your perspective on how to think about impact—may require you to think especially critically and creatively about how you can do an unusual amount of good in that career. Ideal candidates, then, would be self-motivated, creative, and inclined to think rigorously and often about how they can steer toward the highest impact options for them—in addition to having strong personal fit for the work.

What are the pros and cons of each of these paths? Which are less promising than they might at first appear? What particular routes within each one are the most promising and which are the least? What especially promising high-impact career ideas is this list missing?

We’re excited to read people’s reactions in the comments. And we hope that for people who want to pursue paths outside those we talk most about, this list can give them some fruitful ideas to explore.

Career ideas we’re particularly excited about beyond our priority paths

Become a historian focusing on large societal trends, inflection points, progress, or collapse

We think it could be high-impact to study subjects relevant to the long-term arc of history—e.g., economic, intellectual, or moral progress from a long-term perspective, the history of social movements or philanthropy, or the history of wellbeing. Better understanding long trends and key inflection points, such as the industrial revolution, may help us understand what could cause other important shifts in the future (see more promising topics).

Our impression is that although many of these topics have received attention from historians and other academics (examples: 1, 2, 3, 4, 5), some are comparatively neglected, especially from a more quantitative or impact-focused perspective.

In general, there seem to be a number of gaps that skilled historians, anthropologists, or economic historians could help fill. Revealingly, the Open Philanthropy Project commissioned their own studies of the history and successes of philanthropy because they couldn’t find much existing literature that met their needs. Most existing research is not aimed at deriving action-relevant lessons.

However, this is a highly competitive path, which is not able to absorb many people. Although there may be some opportunities to do this kind of historical work in foundations, or to get it funded through private grants, pursuing this path would in most cases involve seeking an academic career. Academia generally has a shortage of positions, and especially in the humanities often doesn’t provide many backup options. It seems less risky to pursue historical research as an economist, since an economics PhD does give you other promising options.

How can you estimate your chance of success as a history academic? We haven’t looked into the fields relevant to history in particular, but some of our discussion of parallel questions for philosophy academia or academia in general may be useful.

It may also be possible to pursue this kind of historical research in ‘non-traditional academia,’ such as at groups like the Future of Humanity Institute or Global Priorities Institute. Learn more about the Global Priorities Institute by listening to our podcast episode with Michelle Hutchinson.

Become a specialist on Russia or India

We’ve argued that because of China’s political, military, economic, and technological importance on the world stage, helping western organizations better understand and cooperate with Chinese actors might be highly impactful.

We think working with China represents a particularly promising path to impact. But a similar argument could be made for gaining expertise in other powerful nations, such as Russia or India.

This is likely to be a better option for you if you are from or have spent a substantial amount of time in these countries. There’s a real need for people with a deep understanding of their cultures and institutions, as well as fluency in the relevant languages (e.g. at the level where one might write a newspaper article about longtermism in Russian).

If you are not from one of these countries, one way to get started might be to pursue area or language studies in the relevant country (one source of support available for US students is the Foreign Language and Area Studies scholarship programme), perhaps alongside economics or international relations. You could also start by working in policy in your home country and slowly concentrate more and more on issues related to Russia or India, or try to work in philanthropy or directly on a top problem in one of these countries.

There are likely many different promising options in this area, both for long-term career plans and useful next steps. Though they would of course have to be adapted to the local context, some of the options laid out in our article on becoming a specialist in China could have promising parallels in other national contexts as well.

Become an expert in AI hardware

Advances in hardware, such as the development of more efficient, specialized chips, have played an important role in raising the performance of AI systems and allowing them to be used economically.

There is a commonsense argument that if AI is an especially important technology, and hardware is an important input in the development and deployment of AI, specialists who understand AI hardware will have opportunities for impact—even if we can’t foresee exactly the form they will take.

Some ways hardware experts may be able to help positively shape the development of AI include:

  • More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input (see the rough sketch after this list).

  • Advising policymakers on hardware issues, such as export, import, and manufacturing policies for specialized chips. (Read a relevant issue brief from CSET.)

  • Helping AI projects make credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.

  • Helping advise on and fulfill the hardware needs of safety-oriented AI labs.
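
To give a flavor of the first item, here is a minimal sketch of how hardware specs feed into a compute estimate for a hypothetical training run. Every number below is an illustrative assumption rather than a real figure for any particular chip, lab, or model.

```python
# Rough sketch: hardware as a quantifiable input to AI forecasting.
# We estimate total training compute for a hypothetical run from chip specs.
# All numbers are illustrative assumptions, not real figures.

chip_flops_per_second = 1e14   # assumed peak throughput of one accelerator
utilization = 0.3              # assumed fraction of peak actually achieved
num_chips = 1_000              # assumed size of the training cluster
training_days = 30             # assumed length of the training run

seconds = training_days * 86_400
total_flops = chip_flops_per_second * utilization * num_chips * seconds
print(f"Estimated training compute: ~{total_flops:.2e} FLOP")
# ~7.78e+22 FLOP under these assumptions
```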

These ideas are just examples of ways hardware specialists might be helpful. We haven’t looked into this area very much, so we are pretty unsure about the merits of different approaches, which is why we’ve listed working in AI hardware here instead of as a part of the AI technical safety and policy priority paths.

We also haven’t come across research laying out specific strategies in this area, so pursuing this path would likely mean both developing skills and experience in hardware and thinking creatively about opportunities to have an impact in the area. If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.

Information security

Researchers at the Open Philanthropy Project have argued that better information security is likely to become increasingly important in the coming years. As powerful technologies like bioengineering and machine learning advance, improved security will likely be needed to protect these technologies from misuse, theft, or tampering. Moreover, the authors have found few security experts already in the field who focus on reducing catastrophic risks, and predict there will be high demand for them over the next 10 years.

In a recent podcast episode, Bruce Schneier also argued that applications of information security will become increasingly crucial, although he pushed back on the special importance of security for AI and biorisk in particular.

We would like to see more people investigating these issues and pursuing information security careers as a path to social impact. One option would be to try to work on security issues at a top AI lab, in which case the preparation might be similar to the preparation for AI safety work in general, but with a special focus on security. Another option would be to pursue a security career in government or a large tech company with the goal of eventually working on a project relevant to a particularly pressing area. In some cases we’ve heard it’s possible for people who start as engineers to train in information security at large tech companies that have significant security needs.

Compensation is usually higher in the private sector. But if you want to work eventually on classified projects, it may be better to pursue a public sector career as it may better prepare you to eventually earn a high level of security clearance.

There are certifications for information security, but it may be better to get started by investigating on your own the details of the systems you want to protect, and/or participating in public ‘capture the flag’ cybersecurity competitions. At the undergraduate level, it seems particularly helpful for many careers in this area to study computer science and statistics.
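
To give a concrete sense of what beginner-level practice can look like, here is a toy exercise in the spirit of an introductory capture the flag challenge: breaking a single-byte XOR ‘cipher’ by brute force. The plaintext, key, and scoring heuristic are invented purely for this illustration.

```python
# Toy exercise in the style of a beginner 'capture the flag' challenge:
# recover a message that was "encrypted" by XORing every byte with one
# unknown key byte. The challenge input below is made up for this example.

def single_byte_xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def english_score(text: bytes) -> int:
    # crude heuristic: count letters and ASCII spaces
    return sum(1 for b in text if chr(b).isalpha() or b == 0x20)

ciphertext = single_byte_xor(b"attack at dawn", 0x5A)  # stand-in challenge input

best_key = max(range(256), key=lambda k: english_score(single_byte_xor(ciphertext, k)))
print(hex(best_key), single_byte_xor(ciphertext, best_key))
# prints: 0x5a b'attack at dawn'
```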

Information security isn’t listed as a priority path because we haven’t spent much time investigating how people working in the area can best succeed and have a big positive impact. Still, we think there are likely to be exciting opportunities in the area, and if you’re interested in pursuing this career path, or already have experience in information security, we’d be interested to talk to you. Fill out this form, and we will get in touch if we come across opportunities that seem like a good fit for you.

Become a public intellectual

Some people seem to have a very large positive impact by becoming public intellectuals and popularizing important ideas—often through writing books, giving talks or interviews, or writing blogs, columns, or open letters.

However, it’s probably even harder to become a successful and impactful public intellectual than a successful academic, since becoming a public intellectual often requires a degree of success within academia while also having excellent communication skills and spending significant time building a public profile. Thus this path seems to us to be especially competitive and a good fit for only a small number of people.

As with other advocacy efforts, it also seems relatively easy to accidentally do harm if you promote mistaken ideas, or even promote important ideas in a way that turns people off. (Read more about how to avoid accidentally doing harm.)

That said, this path seems like it could be extremely impactful for the right person. We think it might be especially valuable to build awareness of certain global catastrophic risks, of the potential effects of our actions on the long-term future, or of effective altruism, as well as to spread positive values like concern for foreigners, nonhuman animals, future people, or others.

There are public intellectuals who are not academics—such as prominent bloggers, journalists and authors. However, academia seems unusually well-suited for becoming a public intellectual because academia requires you to become an expert in something and trains you to write (a lot), and the high standards of academia provide credibility for your opinions and work. For these reasons, if you are interested in pursuing this path, going into academia may be a good place to start.

Public intellectuals can come from a variety of disciplines—what they have in common is that they find ways to apply insights from their fields to issues that affect many people, and they communicate these insights effectively.

If you are an academic, experiment with spreading important ideas on a small scale through a blog, magazine, or podcast. If you share our priorities and are having some success with these experiments, we’d be especially interested in talking to you about your plans.

Journalism

For the right person, becoming a journalist seems like it could be highly valuable for many of the same reasons being a public intellectual might be.

Good journalists keep the public informed and help positively shape public discourse by spreading accurate information on important topics. And although the news media tend to focus more on current events, journalists also often provide a platform for people and ideas that the public might not otherwise hear about.

However, this path is also very competitive, especially when it comes to the kinds of work that seem best for communicating important ideas (which are often complex), i.e., writing long-form articles or books, podcasts, and documentaries. And like being a public intellectual, it seems relatively easy to make things worse as a journalist by directing people’s attention in the wrong way, so this path may require especially good judgement about which projects to pursue and with what strategy. We therefore think journalism is likely to be a good fit for only a small number of people.

Check out our interview with Kelsey Piper of Vox’s Future Perfect to learn more.

Policy careers that are promising from a longtermist perspective

There is likely a lot of policy work with the potential to positively affect the long-run future that doesn’t fit into either of our priority paths of AI policy or biorisk policy.

We aren’t sure what it might be best to ultimately aim for in policy outside these areas. But working in an area that is plausibly important for safeguarding the long-term future seems like a promising way of building knowledge and career capital so that you can judge later what policy interventions seem most promising for you to pursue.

Possible areas include:

See our problem profiles page for more issues, some of which you might be able to help address through a policy-oriented career.

There is a spectrum of options for making progress on policy, ranging from research to work out which proposals make sense, to advocacy for specific proposals, to implementation. (See our write-up on government and policy careers for more on this topic.)

It seems likely to us that many lines of work within this broad area could be as impactful as our priority paths, but we haven’t investigated enough to be confident about the most promising options or the best routes in. We hope to be able to provide more specific guidance in this area in the future.

Become a research manager or a PA for someone doing really valuable work

Some people may be extraordinarily productive compared to the average. (Read about this phenomenon in research careers.) But these people often have to spend much of their time on work that doesn’t take the best advantage of their skills, such as bureaucratic and administrative tasks. This may be especially true for people who work in university settings, as many researchers do, but it is also often true of entrepreneurs, politicians, writers, and public intellectuals.

Acting as a personal assistant can dramatically increase these people’s impact. By supporting their day-to-day activities and freeing up more of their time for work that other people can’t do, you can act as a ‘multiplier’ on their productivity. We think a highly talented personal assistant can make someone 10% more productive, or perhaps more, which is like having a tenth (or more) as much impact as they would have. If you’re working for someone doing really valuable work, that’s a lot.

A related path is working in research management. Research managers help prioritize research projects within an institution and help coordinate research, fundraising, and communications to make the institution more impactful. Read more here. In general, being a PA or a research manager seems valuable for many of the same reasons working in operations management does—these coordinating and supporting roles are crucial for enabling researchers and others to have the biggest positive impact possible.

Become an expert on formal verification

‘Proof assistants’ are programs used to formally verify that computer systems have various properties—for example that they are secure against certain cyberattacks—and to help develop programs that are formally verifiable in this way.

Currently, proof assistants are not very highly developed, but the ability to create programs that can be formally verified to have important properties seems like it could be helpful for addressing a variety of issues, perhaps including AI safety and cybersecurity. So improving proof assistants seems like it could be very high-value.
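
As a small taste of what machine-checked verification looks like in practice: full proof assistants (such as Coq, Isabelle, or Lean) are the tools the paragraphs above have in mind, but a related, more lightweight approach uses an SMT solver. The sketch below, which assumes the z3-solver Python package, checks that a branchless absolute-value trick agrees with a simple specification for every 32-bit input—a guarantee that testing alone can’t give.

```python
# A small, related illustration (an SMT solver rather than a full proof
# assistant): we machine-check that a branchless absolute-value trick
# matches a straightforward specification for every 32-bit input.
# Assumes the 'z3-solver' package: pip install z3-solver
from z3 import BitVec, If, prove

x = BitVec("x", 32)
mask = x >> 31                       # arithmetic shift: all 1s if x < 0, else all 0s
branchless_abs = (x + mask) ^ mask   # classic branchless |x| (wraps at INT_MIN, like the spec)
spec = If(x >= 0, x, -x)             # the obvious two's-complement definition of |x|

prove(branchless_abs == spec)        # prints "proved" if no 32-bit counterexample exists
```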

For example, it might be possible to use proof assistants to help solve the AI ‘alignment problem’ by creating AI systems that we can prove have certain properties we think are required for the AI system to reliably do what we want it to do. Alternatively, we may be able to use proof assistants to generate programs that we need to solve some sub-part of the problem. (Read our career review of researching risks from AI.)

We haven’t looked into formal verification much yet, but both further research in this area and applying existing techniques to important issues seem potentially promising to us. You can enter this path by studying formal verification at the undergraduate or graduate level, or learning about it independently if you have a background in computer science. Jobs in this area exist both in industry and in academia.

Use your skills to meet a need in the effective altruism community

As a part of this community, we may have some bias here, but we think helping to build the community and make it more effective might be one way to do a lot of good. Moreover, unlike other paths on this list, it might be possible to do this part time while you also learn about other areas.

There are many ways of helping build and maintain the effective altruism community that don’t involve working within an effective altruism organisation, such as consulting for one of these organizations, providing legal advice, or helping effective altruist authors with book promotion.

Within this set of roles, we’d especially like to highlight organizing student and local effective altruism groups. Our experience suggests that these groups can be very useful resources for people to learn more about different global problems and connect with others who share their concerns (more resources for local groups).

We think these roles are particularly good to pursue if you are very familiar with the effective altruism community, already have the relevant skills, and are keen to bring them to bear in a more impactful way.

Nonprofit entrepreneurship

If you can find a way to address a key bottleneck to progress in a pressing problem area which hasn’t been tried or isn’t being covered by an effective organisation, starting one of your own can be extremely valuable.

That said, this path seems to us to be particularly high-risk, which is why we don’t list it as a priority path. Most new organizations struggle, and non-profit entrepreneurship can often be even more difficult than for-profit entrepreneurship. Setting up a new organisation will also likely involve diverting resources from other organisations, which means it’s easier than it seems to set the area back. The risks are greater if you’re one of the first organizations in an area, as you could put off others from working on the issue, especially if you make poor progress (although this has to be balanced against the greater information value of exploring an uncharted area).

In general, we wouldn’t recommend that someone start off by aiming to set up a new organisation. Rather, we’d recommend starting by learning about and working within a pressing problem area; then, if through the course of that work you come across a gap that can’t be filled by an existing organisation, consider founding a new one. Organisations developed more organically like this, driven by the needs of a specific problem area, usually seem much more promising.

There is far more to say about the question of whether to start a new organisation, and how to compare different non-profit ideas and other alternatives. A great deal depends on the details of your situation, making it hard for us to give general advice on the topic.

If you think you may have found a gap for an organisation within one of our priority problem areas, or problem areas that seem promising that we haven’t investigated yet, then we’d be interested to speak to you.

Even if you don’t have an idea right now, if you’re interested in spearheading new projects focusing on improving the long-run future you might find it thought-provoking and helpful to fill out this survey for people interested in longtermist entrepreneurship, run by Jade Leung as part of a project supported by Open Philanthropy.

You might also be interested in checking out these resources on effective nonprofits, or the organization Charity Entrepreneurship, especially if you’re interested in global health or animal welfare.

Non-technical roles in leading AI labs

Although we think technical AI safety research and AI policy are particularly impactful, we think having very talented people focused on safety and social impact at top AI labs may also be very valuable, even when they aren’t in technical or policy roles.

For example, you might be able to shift the culture around AI more toward safety and positive social impact by talking publicly about what your organization is doing to build safe and beneficial AI (example from DeepMind), helping recruit safety-minded researchers, designing internal processes to consider social impact issues more systematically in research, or helping different teams coordinate around safety-relevant projects.

We’re not sure which roles are best, but in general ones involved in strategy, ethics, or communications seem promising. Or you can pursue a role that makes an AI lab’s safety team more effective—like in operations or project management.

That said, it seems possible that some such roles could have a veneer of contributing to AI safety without doing much to head off bad outcomes. For this reason it seems particularly important here to continue to think critically and creatively about what kinds of work in this area are useful.

Some roles in this space may also provide strong career capital for working in AI policy by putting you in a position to learn about the work these labs are doing, as well as the strategic landscape in AI.

Create or manage a long-term philanthropic fund

Some of the best opportunities for making a difference may lie far in the future. In that case, investing resources now in order to have many more resources available at that future time might be extremely valuable.
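
As a toy illustration of why this could matter, consider how compounding behaves over a century (the 5% real return below is an assumption chosen for the example, not a forecast):

```python
# Toy illustration of compounding over a long horizon; the 5% real annual
# return is an assumption for the example, not a forecast.
principal = 1.0
real_annual_return = 0.05
years = 100

final_value = principal * (1 + real_annual_return) ** years
print(f"{final_value:.1f}x the initial resources")  # ≈ 131.5x
```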

However, right now we have no way of effectively and securely investing resources over such long time periods. In particular, there are few if any financial vehicles that can be reliably expected to persist for more than 100 years and stay committed to their intended use, while also earning good investment returns. Figuring out how to set up and manage such a fund seems to us like it might be very worthwhile.

Founders Pledge—an organization that encourages effective giving for entrepreneurs—is currently exploring this idea and is actively seeking input. It seems likely that only a few people will be able to be involved in a project like this, as it’s not clear there will be room for multiple funds or a large staff. But for the right person we think this could be a great opportunity. Especially if you have a background in finance or relevant areas of law, this might be a promising path for you to explore.

Explore a potentially pressing problem area

There are many neglected global problems that could turn out to be as or even more pressing than those we currently prioritise most highly. We’d be keen to see more people explore them by acquiring relevant training and a network of mentors, and getting to know the relevant fields.

If the problem area still seems potentially promising once you’ve built up a background, you could take on a project or try to build up the relevant fields, for instance by setting up a conference or newsletter to help people working in the area coordinate better.

If, after investigating, working on the issue doesn’t seem particularly high impact, then you’ve helped to eliminate an option, saving others time.

In either case we’d be keen to see write-ups of these explorations, for instance on this forum.

We can’t really recommend this as a priority path because it’s so amorphous and uncertain. It also generally requires especially high degrees of entrepreneurialism and creativity, since you may get less support in your work, especially early on, and it’s challenging to think of new projects and research ideas that provide useful information about the promise of a less explored area. However, if you fit this profile (and especially if you have existing interest in and knowledge of the problem you want to explore), this path could be an excellent option for you.
