More EAs should consider “non-EA” jobs

Summary: I argue that working in certain non-neglected fields is undervalued as a career option for EA-aligned individuals, and that working at EA organizations is potentially overvalued. By bringing an EA perspective to non-EA spaces, EA-aligned folks can potentially have a substantial impact.

I took a non-EA job back in December (partly because I was rejected from all of the EA jobs I applied to, haha) that falls outside the priority fields as conventionally understood: I work for the California state government in a civil service job. When I started, I was unsure how much impact I could have in this position, but as I’ve reflected on my time with the state, I’ve come to believe that the ways of thinking I’ve drawn from EA have made me much more effective in my job. In particular:

  • Attunement to the magnitude of impact of a particular action or policy

  • Attunement to the marginal value I’m contributing

  • Attunement to how I can make others more effective

  • Attunement to inefficiencies in systems and how fundamental changes to workflows or policies could save time or money

  • Belief in the value of using data to inform decision-making

As an entry-level worker, I haven’t exactly revolutionized how we do things here. But even in my comparatively low-influence position, I feel I’ve been able to make a positive impact, in large part because of the EA-aligned paradigms I bring to my work. My colleagues are smart and committed, but they often aren’t focused on the sorts of things I listed above, and bringing these ways of thinking into the work we do has made a real difference.

Beyond entry-level jobs such as mine, I believe there are tons of opportunities for well-placed individuals to have a substantial impact by working in state government. The amounts of money spent by state governments are tremendous: the 2021–22 California budget is $262.5 billion. Compare that to prominent EA organizations (GiveWell directed $172 million in 2019, for example) or even high-powered philanthropy such as the Bill and Melinda Gates Foundation, which contributed $5.09 billion in 2019 (1). It goes without saying that, in terms of lives saved per dollar, EA orgs spend their money more efficiently; however, so much money is being spent on state, local, and federal government programs that they’re worth considering as areas where bringing in EA perspectives can have a positive impact.
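As a rough back-of-the-envelope comparison using the figures above (these numbers measure very different things, so this is only about orders of magnitude):

$$
\frac{\$262.5\text{B (CA 2021-22 budget)}}{\$0.172\text{B (GiveWell, 2019)}} \approx 1{,}500
\qquad
\frac{\$262.5\text{B}}{\$5.09\text{B (Gates Foundation, 2019)}} \approx 52
$$

Put differently, even a 0.1% efficiency gain in the California budget (roughly $260 million) would be larger than GiveWell’s entire 2019 money moved, though of course improving budget efficiency is not the same as directing money to top charities.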

How that $262.5 billion is allocated and spent is shaped by a number of actors: members of the legislature and the governor’s office; individuals in the state departments who decide which programs to request additional funding for; individuals in the budget offices and the Department of Finance who decide which funding requests to reject or approve; and, of course, individuals throughout the civil service who carry out the funded programs with more or less efficiency. Throughout this process, there are individuals and departments with substantial influence over where the money goes and how well it is spent. Thankfully, decisions are typically made by groups of people rather than lone individuals, but individuals can absolutely have substantial ripple effects. Having one person in the room who advocates for a more data-driven approach, or who points to a neglected aspect of an issue that others hadn’t noted, can shift the whole direction of a policy; having one analyst who develops a more efficient way of processing applications or conducting investigations, and then pushes for that to become policy, can make things more efficient for huge numbers of people. These are just a few examples.

I think that conventional EA approaches to career choice miss these sorts of opportunities because of an overemphasis on cause area. Let’s take a look at the 80,000 Hours perspective on this topic. I recognize that 80,000 Hours is not representative of every EA’s beliefs about career choice, but it is the main institutional voice on the topic.

80,000 Hours argues that your choice of cause area is the single most important factor to consider when choosing your career (2). Their “Start here” page uses a chart comparing the scale of resources devoted to different problem areas to support their argument that people should go into neglected fields:

[Chart from the 80,000 Hours “Start here” page; not reproduced here.]

Their point is that certain issues, such as global biorisk research, are neglected. I agree with that assessment, and I agree that EAs should consider going into biorisk research if it’s right for them. But when I look at this chart, my main takeaway is that there’s a ton of money being spent on welfare, and that working to make sure that money is spent as efficiently as possible could have a huge impact. Applying the framework of neglectedness too broadly risks concealing opportunities for change.

I also think that an overemphasis on cause area can unduly minimize considerations of marginal value. In my experience, EAs can be weirdly selective about when they deploy marginal-value arguments in career choice. I’ve often heard EAs argue that becoming a doctor is not particularly effective: lots of smart, talented people want to be doctors, so you’re unlikely to provide much marginal value as one. Given my impression of the EA job market in many (but not all) areas right now, this seems to be a fair way of approaching many jobs at explicitly EA orgs. There are tons of smart, talented people vying for EA jobs; unless you have an unusual skillset or perspective, it’s not clear to me that any given individual is likely to provide substantial marginal value in an EA job (3, 4). But if you’re an EA-aligned person taking a job at a non-EA org, you’re instantly providing the marginal value of EA-aligned paradigms and approaches. This both allows you to do more good and creates the opportunity to spread EA-aligned ways of thinking.

To be absolutely clear, I am not arguing that folks shouldn’t apply for or work at explicitly EA organizations. These orgs are doing a tremendous amount of good, made possible by the talented and committed folks who work for them. What I am arguing is that considerations of marginal value make working for an explicitly EA org less valuable, and working for a non-EA org more valuable, than conventional wisdom suggests.

Where does that leave us? My hope is that more folks will consider careers in fields not traditionally associated with EA, while continuing to consider conventionally EA ones as well. I’m really excited by the possibility of bringing EA ideas and ways of thinking into more spaces, and taking positions in non-EA spaces is one way of encouraging this. I know some folks feel there’s a natural limit to how many people will want to join EA (whether because of the demanding ethics many associate with EA or some of the more esoteric cause areas under the EA umbrella), but it’s worth considering that there is a subset of people who might not be willing to fully identify with the movement but who would be willing to adopt some of its ways of thinking, especially if they see firsthand how those can help them be more effective in their work.

I also think that a more positive attitude toward “non-EA” jobs would be healthy for the community. I’ve often heard people say (particularly on r/effectivealtruism and at in-person gatherings) that the advice offered by 80,000 Hours feels like it’s only “for” a small subset of high-achieving individuals. I don’t think that’s totally fair, but it is true to an extent. Optimization has always been a big part of EA: “How can I do the most good?” Even as we keep this question central, I think it’s important to also ask gentler variations, such as “How can I do the most good from where I am now?” and “How can I do the most good given the choices I’m willing to make?” A career change isn’t always what’s right for someone, but that doesn’t mean they can’t bring EA concepts into non-traditionally-EA jobs, whether in a policy job (where, I would argue, you may be able to do as much good as at an explicitly EA org) or in any other position.

Working at an explicitly EA-aligned org has both pros and cons. You’re typically engaged in work that is likely highly effective, and you get to be part of a community of people with shared values, which may strengthen your own beliefs and incubate new EA ideas. But in the counterfactual scenario, another hire might have had comparable impact. Working for a non-EA org has its own pros and cons. You’re almost certainly bringing much more of an EA perspective than anyone else would in your position, so you may be able to have substantial marginal impact, and you’re helping to spread EA ideas to new networks. But your day-to-day work is likely less effective than it would be at an EA org, and you may not have a sense of sharing values with your work community, which may dilute your commitment or make the experience less engaging for you.

There are always tradeoffs, and it’s up to each person to figure out which position is right for them at any given time. Career choice is a complex question worthy of sustained attention, and I think it’s fantastic that orgs such as 80,000 Hours have championed treating it with such seriousness. But I worry that the way 80,000 Hours discusses non-neglected cause areas may lead some EAs to dismiss possibly high-impact careers; and, furthermore, I think it’s important that we create space for the question “How can I do the most good given the choices I’m willing to make?” in broader career-choice contexts.

(1) 2021–22 state budget: http://www.ebudget.ca.gov/budget/2021-22EN/#/Home; GiveWell: https://www.givewell.org/about/impact; Bill and Melinda Gates Foundation: https://www.gatesfoundation.org/about/financials/annual-reports/annual-report-2019. I used 2019 numbers for the latter two because they were the most recent available.

(2) https://80000hours.org/make-a-difference-with-your-career/#which-problems-should-you-work-on

(3) My belief that there are large numbers of highly talented individuals applying to EA jobs in certain areas (particularly AI research, catastrophic risk, and mainstream EA orgs; my rough sense is that jobs are more attainable at orgs doing animal-related work) is based on things I’ve seen on the EA Forum and various EA-related Facebook pages, what I’ve heard from people at a few in-person EA gatherings, and my own experiences. I may be wrong, and I am very open to alternative perspectives, but this is my sense of it.

This topic becomes more complex still because the question then becomes “How do I know whether I would add substantial marginal value in this position?”, which is a challenging question to answer. One approach, which I’ve repeatedly heard recommended, is to apply to lots of EA orgs and trust them to assess that; however, this requires investing a decent amount of time in applications with uncertain reward. I found EA jobs to be much more time-intensive to apply for than non-EA jobs on the whole, and dedicating that time was a non-trivial ask.

(4) My belief that any given person may not provide substantial marginal value in a particular role is key to my argument, and definitely a live area of debate. Here’s one counterargument from 80,000 Hours, for example: https://80000hours.org/2019/05/why-do-organisations-say-recent-hires-are-worth-so-much/#2-there-might-be-large-differences-in-the-productivity-of-hires-in-a-specific-role. I’m not an expert in this area, so I am very open to further counterarguments on this point!

Note: I realize that I’m bringing several different ideas into play here; I considered splitting this into separate posts but wasn’t able to make that work. Eager to hear folks’ thoughts on any of the ideas discussed above (reflecting on what it means to bring EA principles into a non-EA job; arguing for the value of policy jobs; arguing against an overemphasis on cause area and neglectedness; arguing for the value of bringing EA perspectives into non-EA spaces; arguing that EA jobs may be overvalued due to marginal value considerations; discussing spreading EA ideas through career choice).

Big thank you to Nicholas Goldowsky-Dill for his help thinking through these concepts!