Consider a wider range of jobs, paths and problems if you want to improve the long-term future

I wrote this post for my personal Facebook and it was well received, so I thought it could be useful to people here on the EA Forum as well.


My impression is that many people whose top career goal is ‘improve the long-term future of humanity’ are overly focused on working at a handful of explicitly EA/longtermist/AI-related organisations.

Some of those projects are great but it would be both crazy and impossible to try to cram thousands of people into them any time soon.

They’re also not the natural place for most people to start their career, even if they might want to work at them later on.

The world is big, and opportunities to improve humanity’s long-term prospects are not likely to be concentrated in just a handful of places we’re already very familiar with.

Folks want to work on these projects mostly because they are solid opportunities to do good, but where does the narrow focus on them come from? I’m not sure, but some drivers might include:

  • They mostly publish and promote what they do, making them especially visible online.

  • It’s fun to work with colleagues you already know, or who share your worldview.

  • They don’t require people to pioneer their own unique path, which can be intimidating and just outright difficult.

  • They feel low-risk and legitimate. People you meet can easily tell you’re doing something they think is cool. And you might feel more secure that you’re likely doing something useful or at least sensible.

  • 80,000 Hours and others have talked about them more in the past.

For a while we’ve been encouraging readers/listeners to broaden the options they consider beyond the immediately obvious ones associated with the effective altruism community. But I’m not sure that message has always cut through, or been enough to overcome the factors above.

I worry the end result is i) too little innovation or independent thinking, ii) some people not finding impactful jobs as they keep applying for a tiny number of positions they aren’t so likely to get or which aren’t even a good fit, and iii) people building less career capital than they otherwise might have.

Additional problems

First, to give readers some ideas, 80,000 Hours recently put up this list of problems which might be as good to work in as the ‘classics’ we’ve written on the most:

  • Measures to reduce the chance of ‘great power’ conflicts

  • Efforts to improve global governance

  • Voting reform

  • Improving individual reasoning

  • Pioneering new ways to provide global public goods

  • Research into surveillance

  • Shaping the development of atomic scale manufacturing

  • Broadly promoting positive values

  • Measures to improve the resilience of civilization

  • Reduction of s-risks

  • Research into whole brain emulation

  • Measures to reduce the risk of stable totalitarianism

  • Safeguarding liberal democracy

  • Research into human enhancement

  • Designing recommender systems at top tech firms

  • Space governance

  • Investing for the future.

The write-up on each is brief, but might be enough to get you started doing further research.

Additional career paths

Second, there’s a new list of other career paths we don’t know a tonne about, or which are a bit vague, but which we expect at least a few readers should take on:

  • Become a historian focusing on large societal trends, inflection points, progress, or collapse

  • Become a specialist on Russia or India

  • Become an expert in AI hardware

  • Information security

  • Become a public intellectual

  • Journalism

  • Policy careers that are promising from a longtermist perspective

  • Be a research manager or a PA for someone doing really valuable work

  • Become an expert on formal verification

  • Use your skills to meet a need in the effective altruism community

  • Nonprofit entrepreneurship

  • Non-technical roles in leading AI labs

  • Create or manage a long-term philanthropic fund

There must be other things that should go on these lists — and some that should come off as well — but at least they’re a start.

Again the description of each is brief, but they’re hopefully a launching pad for people to do more investigation.

(Credit goes to Arden Koehler for doing most of the work on the above.)

Additional jobs

Third, I don’t know what fraction of people have noticed how many positions on our job board are at places they haven’t heard of or don’t know much about, and which have nothing to do with EA.

Some are great for directly doing good, others are more about positioning you to do something awesome later. But anyway, right now there’s:

  • 131 on AI technical and policy work

  • 66 on biosecurity and pandemic preparedness

  • 11 on institutional decision-making

  • 95 on international coordination

  • 34 on nuclear stuff

  • 37 on other random longtermist-flavoured stuff

We’ve only got one person working on the board at the moment, so it’s scarcely likely we’ve exhausted everything that could be listed either.

If nothing there is your bag, maybe you’d consider graduate study in econ, public policy, security studies, stats, public health, biodefence, law, political science, or whatever.

Alternatively, you could develop expertise on some aspect of China, or get a job with promotion possibilities in the civil service, etc, etc.

Which also reminds me of this list of ~50 longtermist-flavoured policy changes and research projects which naturally lead to lots of idiosyncratic career and study ideas.

Anyway, I’m not saying if you can get a job at DeepMind or Open Philanthropy that you shouldn’t take it — you probably should — just that the world of work obviously doesn’t start and end with being a Research Scientist at DeepMind or a Grant-maker at Open Phil.

There are ~4 billion jobs in the world and more that could exist if the right person rocked up to fill them. So it’s crazy to limit our collective horizons to, like, 5 at a time.

As I mention above, some of these paths can feel riskier and harder going than just working where your friends already are. So to help counter that, I suggest paying a bit more respect to the courage or initiative shown by those who choose to figure out their own unique path or otherwise do something different from those around them.


P.S. There’s also a bunch of problems that some other people think are neat ways to improve our long-term trajectory, but about which I’m personally more skeptical — though maybe you agree with them, not me:

  • More research into and implementation of policies for economic growth

  • Improving science policy and infrastructure

  • Reducing migration restrictions

  • Research to radically slow aging

  • Improving institutions to promote development

  • Research into space settlement and terraforming

  • Shaping lie detection technology

  • Finding ways to improve the welfare of wild animals