Consider a wider range of jobs, paths and problems if you want to improve the long-term future

I wrote this post for my personal Facebook and it was well received, so I thought it could be useful to people here on the EA Forum as well.

My impression is that many people whose top career goal is ‘improve the long-term future of humanity’ are overly focused on working at a handful of explicitly EA/​longtermist/​AI-related organisations.

Some of those projects are great but it would be both crazy and impossible to try to cram thousands of people into them any time soon.

They’re also not the natural place for most people to start their career, even if they might want to work at them later on.

The world is big, and opportunities to improve humanity’s long-term prospects are not likely to be concentrated in just a handful of places we’re already very familiar with.

Folks want to work on these projects mostly because they are solid opportunities to do good, but where does the narrow focus on them come from? I’m not sure, but some drivers might include:

  • They mostly publish and promote what they do, making them especially visible online.

  • It’s fun to work with colleagues you already know, or who share your worldview.

  • They don’t require people to pioneer their own unique path, which can be intimidating and just outright difficult.

  • They feel low-risk and legitimate. People you meet can easily tell you’re doing something they think is cool. And you might feel more secure that you’re likely doing something useful or at least sensible.

  • 80,000 Hours and others have talked about them more in the past.

For a while we’ve been encouraging readers/​listeners to broaden the options they consider beyond the immediately obvious options associated with the effective altruism community. But I’m not always sure that message has cut through enough, or been enough to overcome the factors above.

I worry the end result is i) too little innovation or independent thinking, ii) some people not finding impactful jobs as they keep applying for a tiny number of positions they aren’t so likely to get or which aren’t even a good fit, and iii) people building less career capital than they otherwise might have.

Additional problems

First, to give readers some ideas, 80,000 Hours recently put up this list of problems which might be as good to work on as the ‘classics’ we’ve written on the most:

  • Measures to reduce the chance of ‘great power’ conflicts

  • Efforts to improve global governance

  • Voting reform

  • Improving individual reasoning

  • Pioneering new ways to provide global public goods

  • Research into surveillance

  • Shaping the development of atomic scale manufacturing

  • Broadly promoting positive values

  • Measures to improve the resilience of civilization

  • Reduction of s-risks

  • Research into whole brain emulation

  • Measures to reduce the risk of stable totalitarianism

  • Safeguarding liberal democracy

  • Research into human enhancement

  • Designing recommender systems at top tech firms

  • Space governance

  • Investing for the future

The write-up on each is brief, but might be enough to get you started doing further research.

Additional career paths

Second, there’s a new list of other career paths we don’t know a tonne about, or which are a bit vague, but which we expect at least a few readers should take on:

  • Become a historian focusing on large societal trends, inflection points, progress, or collapse

  • Become a specialist on Russia or India

  • Become an expert in AI hardware

  • Information security

  • Become a public intellectual

  • Journalism

  • Policy careers that are promising from a longtermist perspective

  • Be research manager or a PA for someone doing really valuable work

  • Become an expert on formal verification

  • Use your skills to meet a need in the effective altruism community

  • Nonprofit entrepreneurship

  • Non-technical roles in leading AI labs

  • Create or manage a long-term philanthropic fund

There must be other things that should go on these lists — and some that should come off as well — but at least they’re a start.

Again the description of each is brief, but hopefully they’re a launching pad for people to do more investigation.

(Credit goes to Arden Koehler for doing most of the work on the above.)

Additional jobs

Third, I don’t know what fraction of people have noticed how many positions on our job board are at places they haven’t heard of or don’t know much about, and which have nothing to do with EA.

Some are great for directly doing good, others are more about positioning you to do something awesome later. But anyway, right now there’s:

  • 131 on AI technical and policy work

  • 66 on biosecurity and pandemic preparedness

  • 11 on institutional decision-making

  • 95 on international coordination

  • 34 on nuclear stuff

  • 37 on other random longtermist-flavoured stuff

We’ve only got one person working on the board at the moment, so it’s scarcely likely we’ve exhausted everything that could be listed either.

If nothing there is your bag maybe you’d consider graduate study in econ, public policy, security studies, stats, public health, biodefence, law, political science, or whatever.

Alternatively, you could develop expertise on some aspect of China, or get a job with promotion possibilities in the civil service, etc, etc.

Which also reminds me of this list of ~50 longtermist-flavoured policy changes and research projects which naturally lead to lots of idiosyncratic career and study ideas.

Anyway, I’m not saying if you can get a job at DeepMind or Open Philanthropy that you shouldn’t take it — you probably should — just that the world of work obviously doesn’t start and end with being a Research Scientist at DeepMind or a Grant-maker at Open Phil.

There are ~4 billion jobs in the world, and more that could exist if the right person rocked up to fill them. So it’s crazy to limit our collective horizons to, like, 5 at a time.

As I mention above, some of these paths can feel riskier and harder going than just working where your friends already are. So to help counter that, I suggest paying a bit more respect to the courage or initiative shown by those who choose to figure out their own unique path or otherwise do something different than those around them.


P.S. There’s also a bunch of problems that some other people think are neat ways to improve our long-term trajectory about which I’m personally more skeptical — but maybe you agree with them not me:

  • More research into and implementation of policies for economic growth

  • Improving science policy and infrastructure

  • Reducing migration restrictions

  • Research to radically slow aging

  • Improving institutions to promote development

  • Research into space settlement and terraforming

  • Shaping lie detection technology

  • Finding ways to improve the welfare of wild animals