SHOW: A framework for shaping your talent for direct work

By Ryan Carey, co-written with Tegan McCaslin (this post represents our own opinions, and not those of our current or former employers)

TLDR: If your career as an EA has stalled, you’ll eventually break through if you do one (or more) of four things: gain skills outside the EA community, assist the work of more senior EAs, find valuable projects that EAs aren’t willing to do, or find projects that no one is doing yet.


Let’s say you’ve applied to, and been rejected from, several jobs in high-impact areas over the past year (a situation that is becoming more common as the movement grows). At first you thought you were just unlucky, but it looks increasingly likely that your current skillset simply isn’t competitive for the positions you’re applying to. So what’s your next move?

I propose that there are four good paths open to you now:

  1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.

  2. Get Humble: Amplify others’ impact from a more junior role.

  3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.

  4. Get Weird: Find things no one is doing.

I’ve used or strongly considered all of these strategies myself, so before I outline each in more depth I’ll discuss the role they’ve played in my career. (And I encourage readers who resonate with SHOW to do the same in the comments!) Currently I do AI safety research for FHI. But when I first came to the EA community 5 years ago, my training was as a doctor, not as a researcher. So when I had my first professional EA experience, as an intern for 80,000 Hours, my work was far from extraordinary. As the situation stood, I was told that I would probably be more useful as a funder than as a researcher.

I figured that in the longer term, my greatest chance of having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize it. I got skilled by pursuing a master’s degree in bioinformatics, thinking I might contribute to work on genomics or brain emulation.

But when I graduated, I realized I still wouldn’t be able to lead research on these topics; I didn’t yet have substantial experience with the research process. So I got humble and reached out to MIRI to see if they could use a research assistant. There, I worked under Jessica Taylor for a year, until the project I was involved in wound down. After that I reached out to several places to continue doing AI safety work, and was accepted as an intern and ultimately a full-time researcher at FHI.

Right now, I feel like I have plenty of good AI safety projects to work on. But will the ideas keep flowing? If not, that’s totally fine: I can get outside and work on security and policy questions that EA hasn’t yet devoted much time to, or I could dive into weird problems like brain emulation or human enhancement that few people anywhere are working on.

The fact is that EA is made up, in large part, of a bunch of talented generalists trying to squeeze into tiny fields, with very little supervision to go around. For most people, trying to do direct work will mean repeatedly hitting career walls like I did, and there’s no shame in that. If anything, the personal risk you incur through this process is honorable and commendable. Hopefully, the SHOW framework will just help you go about hitting walls a little more efficiently.

1. Get Skilled (outside of EA)

This is common advice for a reason: it’s probably the safest and most accessible path for the median EA. When you consider that skills are generally learned more easily with supervision, and that most skills are transferable between EA and non-EA contexts, getting training in the form of a graduate degree or a relevant job is an excellent choice. This is especially true if you broadly know what kind of work you want to do but don’t have a very specific vision of the particulars. Even if you already have skills which seem sufficient to you, they might not be well-suited for the roles you’re interested in, in which case retraining is probably in order.

However, you should make sure that whatever training scheme you have in mind will actually prepare you for what you want to do, or will at least give you good option value. Some things which sound relevant won’t be, and some things which don’t sound relevant will be. Making sure your goals are clear and sensible is an important step in making this strategy work.

Although in theory it could cut down on your option value, you might also want to err on the side of getting specialized skills. This takes you out of the very large reference class of “talented EA generalist”, where you have to be really extraordinary to be noticed, into a less competitive pool where you have a better shot at making a substantial contribution.

80k has reviewed jobs in numerous high-impact career paths in detail, and makes specific recommendations about training in each review—check the guide out if you haven’t already.

2. Get Humble (by helping more senior EAs)

If you can find someone who shares your goals and intellectual interests, and who actually thinks you could be of use to them, that opportunity is golden. For researchers, this probably means being a research assistant, while for operations it might mean volunteering to help an organizer run events. In some cases, offering to work as a personal assistant to a highly productive EA could be an enormous help, since finding a competent PA on one’s own is quite difficult.

Despite being a great choice both for skill-building and for direct impact, these opportunities are undervalued for a few reasons. First, people often underestimate the share of a field’s contributions for which its top performers are responsible; acting as a “force multiplier” for one of these top people will often be higher impact than contributing your own independent work. Second, you might have to take a hit to your ego to subordinate your work to someone else’s success, especially if they are relatively junior. (It’s worth noting, though, that other people will usually be quite impressed to learn that you’ve worked under another successful EA.)

When I worked under Jessica Taylor at MIRI, we were both quite young and technically had the same level of credentials, and she lacked management experience. But while there, in addition to assisting high-impact work, I learned a huge amount about writing papers, building good research intuitions, and picking up relevant mathematics. I improved much faster over that period than when I was studying in an ML lab, which in turn was much faster than when I was taking classes or self-studying. Organizations like MIRI and FHI receive hundreds of applications per year for researcher roles, whereas the number of people per year who ask to join as research assistants is something like thirty times lower. Given the opportunities for skill development, freedom and publication that research assistants have, I think aspiring EA researchers are probably making a big mistake by so rarely asking for these sorts of positions.

There are quite a few caveats to this strategy. First of all, the capacity of this path to absorb people is limited by the number of experienced people willing to take on assistants, so it won’t be a good fit for as many people as Get Skilled might be. There are many reasons a person might decline an offer of assistance. For one, such an offer implicitly requests some degree of management, which not everyone will have the bandwidth for. Even if the person you approach does have the bandwidth to manage an assistant, they may need a lot of evidence to be convinced that you would add value. And in any case, they may not be in a position to offer you compensation.

Nonetheless, this path seems underexploited, and should be a fantastic stepping-stone for those who are able to pull it off.

3. Get Outside (of the conventional EA approaches)

Just like any social community, EA has incentives that aren’t perfectly aligned with optimal resource allocation. If you understand those incentives well, you can identify those areas most likely to be neglected by EAs. And if it were important to have an EA perspective represented in an area, you might want to pursue it even if there’s a lot of non-EA work being done there already.

An area might be neglected by EAs because it’s devalued by EA culture or politics. EAs are mostly wealthy, urban and center-left, and there may be causes which would be apparent to individuals from other backgrounds but are completely off the radar of mainstream EAs. Some paths are avoided largely because they offend EA cultural sensibilities, not because they lack impactful opportunities. For example, since EA and rationality lean toward startups, non-hierarchical structures and counterculturalism, few EAs engage in security and defense. Some EAs who’ve bucked this trend have found quite a bit of success, like Jason Matheny, who served as the director of IARPA. As this example shows, the highest-impact careers are often not in EA organizations at all. If you can succeed in one of these neglected career paths, your ultimate impact could far outshine anything you could have achieved at a “traditional” EA org.

Sometimes, an activity is collectively beneficial but individually costly. If you write an article that includes criticism of community organizations and institutions, this may be an extremely valuable service, but it nonetheless carries some risk of social punishment. Examples of articles reviewing institutions include the AI Alignment Literature Review and the Review of Basic Leverage Research Facts.

4. Get Weird (by finding your own bizarre niche)

Right now, professional EA is slow-growing in terms of depth: because management capacity is bottlenecked, it’s often difficult to get value from adding marginal generalist hires to a project. But there are no such limits on breadth, and if you can find something to do that fewer than ten people in the world are currently doing, you can chip away at the nearly infinite list of “things someone should do but no one’s seriously considered yet”.

Ten years ago, AI safety was on that list. The few people who were thinking about it in the early days are often now heading organizations or pioneering ambitious research programs. It’s definitely not the case that all causes on the list will grow to the magnitude that AI safety has, but some will turn out to be important, and many will be valuable in a second-order way.

Few people are working on impact certificates, voting method reform, whole brain emulation, alternative foods, atomically precise manufacturing, or global catastrophic biorisks. None of those are slam-dunk causes. But there’s a lot to be said for the value of information, and many suboptimal causes will be adjacent to genuinely promising ones. If you have a specialized background or interests that position you well to pursue the unusual (for instance, if you have two distinct areas of expertise that aren’t often combined), this strategy is made for you.

Of the four strategies, getting weird is probably the riskiest, and the one fewest people are suited for. Projects chosen at random from the list are overwhelmingly likely to be of no value whatsoever, so you’d have to rely on your (probably untested) ability to choose well in the face of little evidence. Worse, there are major “unilateralist’s curse” concerns for projects that seem promising but haven’t been pursued. These dangers aren’t so great that this strategy can’t be recommended to anyone, and it’s probably worth most people’s time to come up with a short list of speculative projects they’d be suited to working on. But readers should be advised to proceed with caution and seek feedback on any harebrained schemes.

Putting it all together

The four strategies above aren’t mutually exclusive, and in fact combining them where you can (and where it makes sense) may yield better results than using any one strategy on its own. I think with enough work, SHOWing can eventually pay off for most people, but it may take a while to get there. I gave a clean little story about my own career trajectory above, but be assured that my path was also littered with false starts, rejections and failures.

If I were wiping the slate clean and starting my career over now, I might go through each of the four strategies and enumerate all the opportunities open to me on each path. I could then rank these opportunities by the probability that I’d succeed in pursuing them, how much success would move me toward my ultimate career goals, and the amount of direct impact a success would represent. I’d also want to consider what kind of competition I would face for each opportunity. Basically, SHOW can help with the initial step of generating a moderately-sized list of strong options, although the impact potential of these options still needs to be analyzed.
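To make the ranking step concrete, here’s a minimal sketch of what scoring options could look like. The opportunities, probabilities and weights below are invented placeholders for illustration, not recommendations:

```python
# Toy expected-value-style ranking of career opportunities.
# Every option and number below is an invented placeholder.

# Each option: (name, probability of success,
#               career progress 0-10, direct impact 0-10)
options = [
    ("Master's degree in a relevant field", 0.8, 6, 2),
    ("Research assistant to a senior researcher", 0.4, 8, 5),
    ("Policy role outside EA organizations", 0.5, 5, 6),
    ("Speculative independent project", 0.2, 4, 9),
]

def score(p_success, progress, impact, progress_weight=0.5):
    """Chance of success times a weighted blend of career
    progress and direct impact."""
    return p_success * (progress_weight * progress
                        + (1 - progress_weight) * impact)

# Print options from highest to lowest expected score.
for name, p, prog, imp in sorted(options, key=lambda o: score(*o[1:]),
                                 reverse=True):
    print(f"{score(p, prog, imp):5.2f}  {name}")
```

The numbers matter far less than the exercise of making your assumptions explicit, and in practice you’d also want to discount options where the competition is heavy.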

The final thing I want to share is the example of my current favorite musician, Brad Mehldau. At 48, he’s considered one of the top improvisational pianists, but he took a long road to the top. As a child he developed skills in classical piano, an ideal way to practice left-hand technique and multiple harmonies. He moved to New York to study jazz, and got humble, touring as a sideman for a saxophonist for 18 months. His first two albums, recorded at ages 24 and 25, consisted mostly of jazz standards, and were criticised for sounding too much like a certain legendary pianist in the jazz tradition. But with experience, he grew a more distinctive voice. Nowadays he plays a unique style of jazz that steps outside the genre’s usual confines to incorporate pop covers and elements of classical music. He even has one delightfully weird album where each song is modeled on one of Bach’s. Many people like him manage to make good career decisions without doing expected value calculations at each step, instead choosing to learn important skills, surround themselves with brilliant people, and eventually find a niche where they can fulfill their potential. When our best-laid plans fail, we can do worse than falling back on these heuristics.

Thanks to Howie Lempel for feedback on this post, though mistakes are ours alone.