SHOW: A framework for shaping your talent for direct work
By Ryan Carey, cowritten with Tegan McCaslin (this post represents our own opinions, and not those of our current or former employers)
TLDR: If your career as an EA has stalled, you’ll eventually break through if you do one (or more) of four things: gaining skills outside the EA community, assisting the work of more senior EAs, finding valuable projects that EAs aren’t willing to do, or finding projects that no one is doing yet.
Let’s say you’ve applied to, and been rejected from, several jobs in high-impact areas over the past year (a situation that is becoming more common as the movement grows). At first you thought you were just unlucky, but it looks increasingly likely that your current skillset just isn’t competitive for the positions you’re applying to. So what’s your next move?
I propose that there are four good paths open to you now:
Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.
Get Humble: Amplify others’ impact from a more junior role.
Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.
Get Weird: Find things no one is doing.
I’ve used or strongly considered all of these strategies myself, so before I outline each in more depth I’ll discuss the role they’ve played in my career. (And I encourage readers who resonate with SHOW to do the same in the comments!) Currently I do AI safety research for FHI. But when I first came to the EA community 5 years ago, my training was as a doctor, not as a researcher. So when I had my first professional EA experience, as an intern for 80,000 Hours, my work was far from extraordinary. As the situation stood, I was told that I would probably be more useful as a funder than as a researcher.
I figured that in the longer term, my greatest chance at having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize that. I got skilled by pursuing a master’s degree in bioinformatics, thinking I might contribute to work on genomics or brain emulations.
But when I graduated, I realized I still wouldn’t be able to lead research on these topics; I didn’t yet have substantial experience with the research process. So I got humble and reached out to MIRI to see if they could use a research assistant. There, I worked under Jessica Taylor for a year, until the project I was involved in wound down. After that I reached out to several places to continue doing AI safety work, and was accepted as an intern and ultimately a full-time researcher at FHI.
Right now, I feel like I have plenty of good AI safety projects to work on. But will the ideas keep flowing? If not, that’s totally fine: I can get outside and work on security and policy questions that EA hasn’t yet devoted much time to, or I could dive into weird problems like brain emulation or human enhancement that few people anywhere are working on.
The fact is that EA is made up in some large part by a bunch of talented generalists trying to squeeze into tiny fields, with very little supervision to go around. For most people, trying to do direct work will mean that you repeatedly hit career walls like I did, and there’s no shame in that. If anything, the personal risk you incur through this process is honorable and commendable. Hopefully, the SHOW framework will just help you go about hitting walls a little more efficaciously.
1. Get Skilled (outside of EA)
This is common advice for a reason: it’s probably the safest and most accessible path for the median EA. When you consider that skills are generally learned more easily with supervision, and that most skills are transferable between EA and non-EA contexts, getting training in the form of a graduate degree or a relevant job is an excellent choice. This is especially true if you broadly know what kind of work you want to do but don’t have a very specific vision of the particulars. Even if you already have skills which seem sufficient to you, they might not be well-suited for the roles you’re interested in, in which case retraining is probably in order.
However, you should make sure that whatever training scheme you have in mind will actually prepare you for what you want to do, or will otherwise give you good option value. Some things which may sound relevant won’t be, and some things which don’t will be. Making sure your goals are clear and sensible is an important step to making this strategy work.
Although in theory it could cut down on your option value, you might also want to err on the side of getting specialized skills. This takes you out of the very large reference class of “talented EA generalist”, where you have to be really extraordinary to be noticed, into a less competitive pool where you have a better shot at making a substantial contribution.
80k has reviewed jobs in numerous high-impact career paths in detail, and makes specific recommendations about training in each review—check the guide out if you haven’t already.
2. Get Humble (by helping more senior EAs)
If you can find someone who shares your goals and intellectual interests, and who actually thinks you could be of use to them, that opportunity is golden. For researchers, this probably means being a research assistant, while for operations it might mean volunteering to help an organizer run events. In some cases, offering to PA for a highly productive EA could be an enormous help, since the search for competent PAs on one’s own is quite difficult.
Despite the fact that they’re a great choice both for skill-building and direct impact, these types of opportunities are undervalued for a few reasons. Firstly, people often underestimate the share of contributions that top performers in a field are responsible for. Acting as a “force multiplier” for one of these top people will often be higher impact than contributing your own independent work. You might also have to take a hit to your ego to subordinate your work to someone else’s success, especially if they are relatively junior. (It’s worth noting, though, that other people will usually be quite impressed to learn that you’ve worked under another successful EA.)
When I worked under Jessica Taylor at MIRI, we were both quite young and technically had the same level of credentials, and she lacked management experience. But while there, in addition to assisting high impact work, I got to learn a huge amount about writing papers, having good research intuitions and learning relevant mathematics. I improved much faster over that period than when I was studying in an ML lab, which in turn was much faster than when I was taking classes or self-studying. Organizations like MIRI and FHI receive hundreds of applications per year for researcher roles, whereas the number of people per year who ask to join as research assistants is something like thirty times lower. Given the opportunities for skill development, freedom and publication that research assistants have, I think EA researchers are probably making a big mistake by so rarely asking for these sorts of positions.
There are quite a few caveats to this strategy. First of all, the capacity for this path to absorb people is limited by the number of experienced people willing to take on assistants, so it won’t be a good fit for as many people as Get Skilled might be. There are many reasons a person might decline an offer of assistance. For one, such an offer implicitly requests some degree of management, which not everyone will have the bandwidth for. Even if the person you approach does have the bandwidth to manage an assistant, they may need to see a lot of evidence before they’re convinced you would add value. And in any case, they may not be in a position to offer you compensation.
Nonetheless, this path seems underexploited, and should be a fantastic stepping-stone for those who are able to pull it off.
3. Get Outside (of the conventional EA approaches)
Just like any social community, EA has incentives that aren’t perfectly aligned with optimal resource allocation. If you understand those incentives well, you can identify those areas most likely to be neglected by EAs. And if it were important to have an EA perspective represented in an area, you might want to pursue it even if there’s a lot of non-EA work being done there already.
An area might be neglected by EAs because it’s devalued by EA culture/politics. EAs are mostly wealthy, urban and center-left, and there may be causes which would be apparent to individuals from other backgrounds, but are completely off the radar of mainstream EAs. And some paths are avoided largely because they offend EA cultural sensibilities, not because they lack impactful opportunities. For example, since EA and rationality lean toward startups, non-hierarchical structures and counterculturalism, few EAs engage in security and defense. Some EAs who’ve bucked this trend have found quite a bit of success, like Jason Matheny, who served as the director of IARPA. From this example, we can see that the highest impact careers are often not in EA organizations at all. If you can succeed in one of these neglected career paths, your impact could ultimately far outshine the impact you could have had by working at a “traditional” EA org.
Sometimes, an activity is collectively beneficial but individually costly. If you write an article that includes criticism of community organizations and institutions, this may be an extremely valuable service, but it nonetheless carries some risk of social punishment. Examples of articles reviewing institutions include the AI Alignment Literature Review and the Review of Basic Leverage Research Facts.
4. Get Weird (by finding your own bizarre niche)
Right now, professional EA is slow-growing in terms of depth: because of the way management capacity is bottlenecked, it’s often difficult to get value from adding marginal generalist hires to a project. But there are no such limits on breadth, and if you can find something to do that less than 10 people in the world are currently doing, you can chip away at the nearly infinite list of “things someone should do but no one’s seriously considered yet”.
Ten years ago, AI safety was on that list. The few people who were thinking about it in the early days are often now heading organizations or pioneering ambitious research programs. It’s definitely not the case that all causes on that list will grow to the magnitude that AI safety has, but some will turn out to be important, and many will be valuable in a second order way.
Few people are working on impact certificates, voting method reform, whole brain emulation, alternative foods, atomically precise manufacturing, or global catastrophic biorisks. None of those are slam-dunk causes. But there’s a lot to be said for the value of information, and many suboptimal causes will be adjacent to genuinely promising ones. If you have a specialized background or interests that position you well to pursue the unusual (for instance, if you have two distinct areas of expertise that aren’t often combined), this strategy is made for you.
Of the four strategies, getting weird is probably the riskiest, and the one fewest people are suited for. Projects chosen at random from the list are overwhelmingly likely to be of no value whatsoever, so you’d have to rely on your (probably untested) ability to choose well in the face of little evidence. Worse, there are major “unilateralist’s curse” concerns for projects that seem promising but haven’t been pursued. These dangers aren’t so great that this strategy can’t be recommended to anyone, and it’s probably worth most people’s time to come up with a short list of speculative projects they’d be suited to working on. But readers should be advised to proceed with caution and seek feedback on any harebrained schemes.
Putting it all together
The four strategies above aren’t mutually exclusive, and in fact combining them where you can (and where it makes sense) may yield better results than using any one strategy on its own. I think with enough work, SHOWing can eventually pay off for most people, but it may take a while to get there. I gave a clean little story about my own career trajectory above, but be assured that my path was also littered with false starts, rejections and failures.
If I were wiping the slate clean and starting my career over now, I might go through each of the four strategies and enumerate all the possible opportunities open to me on each path. I could then rank these opportunities by the probability I succeed in pursuing them, the amount success would move me closer to my ultimate career goals, and the amount of direct impact a success would represent. I’d also want to consider what kind of competition I would have for each opportunity. Basically, SHOW can help with the initial step of generating a moderately-sized list of strong options, although the impact potential of these options still needs to be analyzed.
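The ranking step described above can be sketched as a simple scoring exercise. This is purely illustrative: the example opportunities, probabilities, and 0–10 scales below are hypothetical numbers of my own invention, not figures from the post, and real career decisions involve far more nuance than a single product of terms.

```python
# Illustrative sketch of ranking SHOW-style opportunities by expected value.
# All entries and weights are hypothetical, for demonstration only.

opportunities = [
    # (name, P(success), career-capital gain 0-10, direct impact 0-10)
    ("Get Skilled: ML master's degree",        0.8, 7, 2),
    ("Get Humble: RA for a senior researcher", 0.3, 8, 6),
    ("Get Outside: policy role in government", 0.4, 6, 5),
    ("Get Weird: niche independent project",   0.1, 4, 9),
]

def score(p_success, career_capital, direct_impact):
    """Expected value of pursuing an option: the probability of success
    times the total value success would bring (career progress plus
    direct impact)."""
    return p_success * (career_capital + direct_impact)

# Rank options from highest to lowest expected value.
ranked = sorted(opportunities, key=lambda o: score(*o[1:]), reverse=True)
for name, p, cc, di in ranked:
    print(f"{score(p, cc, di):5.2f}  {name}")
```

One design note: competition for each opportunity, which the text also flags as important, would enter this sketch through the P(success) term.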
The final thing I want to share is the example of my current favorite musician, Brad Mehldau. He’s considered one of the top improvisational pianists, at 48. But he took a long road to the top. As a child he developed skills in classical piano, an ideal way to practice left-hand skills and multiple harmonies. He moved to New York to study jazz, and got humble, touring as a side-man for a saxophonist for 18 months. His first two albums, at age 24-25, consisted mostly of jazz standards, and were criticised as sounding too much like a certain legendary pianist in the jazz tradition. But with experience, he grew a more distinctive voice. Nowadays he plays a unique style of jazz that steps outside of jazz’s usual confines to incorporate pop covers and elements of classical music. He has one delightfully weird album where each song copies one of Bach’s. Many people like him manage to make good career decisions without doing expected value calculations at each step, instead choosing to learn important skills, surround themselves with brilliant people, and eventually find a niche where they can fulfill their potential. When our best laid plans fail, we can do worse than falling back on these heuristics.
Thanks to Howie Lempel for feedback on this post, though mistakes are ours alone.
This is great. One thing I’d add is ‘Demonstrate’. (Or dare I say… Show.)
If you think your skills are better than people can currently measure with confidence, you need to find a way to credibly signal how capable you are, while demanding as little time as possible from senior people in the process.
You can do that in a lower level role, or by pulling off some impressive, scrutable and visible project. Or getting a more classic credential. Maybe other things as well.
One reason so many prominent EAs have been writers in the past is not only that it’s a very broadly useful skill. It’s also a skill which is unusually public and easy for others to evaluate you on. It also gives you a chance to demonstrate your general reasoning ability, which is one of the most widely valued characteristics.
My 2 cents is a shift in mindset. It’s related to 3.
In my experience, the elephant in the brain is that most of us, most of the time, are still optimizing for approval 90% and value 10%. If you manage to snap out of that, you’ll suddenly see that there is a lot of unacknowledged value out there.
Not because anyone’s intentionally ignoring things, but because the people and organisations that have the position to acknowledge value are busy, and imperfect. They haven’t thought of everything. They don’t have everything in control. They’re not your superiors. The only difference between them and you is that they realized that there’s no fire alarm for adulthood. You won’t wake up one day and realise that you are now wise and able to handle everything.
The terrifying truth is that there are not enough adults in the room. You are, in some broad sense, the most responsible individual here. No one can tell you what to do. So now you have to take ownership of these problems and personally see to it that they are solved. It’s terrifying, but it’s necessary.
In some sense, we’re not going to make it without your help.
It’s been said that EA is vetting constrained, but in some deep sense it’s more that EA (and the world) is constrained by the number of people who don’t need to be told what to do.
So build up that skill of making decisions that you would feel comfortable about even if a large number of people scrutinized you for them. Then start acting as if a large number of superintelligent and reasonable people are scrutinizing you with the expectation that you will personally take care of everything. If you can handle that pressure, it’s the best prompt I’ve found to get myself to start generating plenty of work to do. Much more than I can do on my own.
Great, I feel less crazy when other people have the same thoughts as me. From my comment a week ago:
I strongly feel this is incorrect. Coordination is incredibly expensive, is already a major pain point and source of duplication and wasted effort, and having lots of self-directed go-getters will make that worse.
Yes. This makes me think of investor Keith Rabois’ notion of “barrels” vs “ammunition”:
The attitude you’re describing reminds me of the attitude that Keith Rabois refers to as a “barrel.”
I’d love to hear why this got downvoted. Am I missing something?
Didn’t downvote but my two cents:
I am unsure about the net value of encouraging people to simply need less management and wait for less approval.
Some (most?) people do need guidance until they are able to run projects independently and successfully, ignoring the need doesn’t make it go away.
The unilateralist’s curse is scary. A lot of the decisions about EA network growth and strategy that the core organizations came to were rather counter-intuitive to most of us until we got the chance to talk them through with someone who had spent significant amounts of time thinking about them.
Even with value-aligned actors, coordination might become close to impossible if we accelerate the number of nodes without accelerating the development of culture. I currently prefer preserving the option of coordination being possible over “many individuals try different things because coordination seemed too difficult a problem to overcome”.
This seems like good advice. However, I’d say that if your goal is to “advance your EA career”, maybe you should do some goal factoring—what do you want out of an EA career? If what you want is to do as much good as possible, it’s totally possible that your comparative advantage doesn’t look like a standard EA career path.
If what you want is EA street cred, maybe you should just get better at talking about what you’re doing (which might require an internal shift in how you think about what you’re doing).
If you are actually doing something that’s not that great and won’t get you better positioned to do more good later, of course consider changing paths :)
This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it’s a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the “get humble” button for a while. This post also stands out to me in that I’ve come back to it again and again.
If I value this post at 0.5% of my career, which I ¿do? ¿there aren’t really 200 posts which have influenced me that much?, it was worth 400 hours of my time, or $4,000 to $40,000 of my money. I probably wouldn’t pay that upfront, but it might plausibly be worth that much when looking back at a career.
I think the above is probably an overestimate, but I think that it points at something true, particularly if the post reached many people, as it probably did (though I’m probably an unrepresentative fan). If a final book is produced out of this 10 year review, I’d want this post to be on it.
Great post! It seems to line up with much of 80K’s advice on building career capital, while adding additional points I don’t think I’ve seen articulated by anyone else (or at least, not so clearly).
This is a really useful thing to point out. During my 2018 application process for EA jobs, I was one of several hundred research applicants to Open Phil and one of very few applicants to several operations or executive-assistant positions at other research organizations. In at least one case, I may have been the only applicant.
Ryan/Tegan: Did you get your “something like thirty times lower” estimate from any particular research organization(s)? Is it a best guess based on your participation in someone’s hiring process, or some other element of your personal experience?
Also really useful. I love lists of ideas for things to work on, and it’s good to consider a wide range of things you could be doing, but the first step you take in any potential long-term project should involve serious analysis of the project’s paths to impact and expected value.
You should also be thinking about ways to test your idea as quickly as you can, if that makes sense for your style of work. The Lean Startup is a classic book on this topic. (I liked it much more than I expected to; it’s not just about the bits that have been quoted ad infinitum by founders.)
--
(I work for CEA, but these views are my own.)
This is an order-of-magnitude estimate based on experience at various orgs. I’ve asked to be a research assistant for various top researchers, and generally I’m the only person asking at that time. I’ve rarely heard from researchers that someone has asked to research-assist with them. Some of this is because RA job descriptions are less common, but I would guess that there is still an effect even when there are RA job descriptions.
This isn’t really comparing like with like, however—in one case you’re doing cold outreach and in others there are established application processes. It might make more sense to compare the demand for researcher positions with e.g. Toby Ord’s Research Assistant position.
But if your point is that people should be more willing to do cold outreach for research assistant positions like you did, that seems fair.
Were any of the RA positions advertised or were they exclusively cold outreach? I can’t think of times when I’ve seen this sort of position being advertised (context being that I’ve mostly looked at effective animal advocacy research positions, and very occasionally positions at meta EA orgs)
I’m interested if these sorts of “assistant” roles crop up very often, be it in research or otherwise.
If they aren’t formally advertised, do you think that people have to accept very low salaries to have a decent chance of securing a role? If an org/researcher has a need for an assistant, why wouldn’t they have advertised for it?
I hear more people do cold outreach about being a researcher than RA, and my guess is that 3-10x more people apply for researcher than RA jobs even when they are advertised. I think it’s a combination of those two factors.
My recommendation would be that people apply more to RA jobs that are advertised, and also reach out to make opportunities for themselves when they are not.
I think about half of researchers can use research assistants, whether or not they are currently hiring for one. A major reason researchers don’t make research assistant positions available is that they don’t expect to find one worth hiring, and so don’t want to incur the administrative burden. Or maybe they don’t feel comfortable asking their bosses for this. But if you are a strong candidate, coldly reaching out may result in you being hired, or may trigger a hiring round for that position. Often, though, strong candidates are people I have met at an EA conference, who got far in an internship application, or who were referred to me.
I don’t think the salaries would be any lower than competitive rates.
Thanks for writing this up. Possibly a pedantic comment, but aren’t Outside and Weird the same? I can’t see how my strategies would differ if I was pursuing one rather than the other.
I think I can see the difference. Imagine I’m pursuing a career in criminal justice reform.
Outside might look like me taking a role at the Centre for Social Justice, a Conservative think tank in London. It could also look like me applying for Unlocked, a prison officer graduate programme that awards you an MSc after two years of working in prisons. Those are both unusual choices within the EA community, but pretty normal jobs overall.
Weird might look like me becoming a lawyer and working pro bono on behalf of people who want to sue prisons. Or becoming an activist campaigning for voting rights for prisoners. Or running an RCT on whether people are more likely to leave early for good behaviour/less likely to reoffend if they get weekly visits from a family member.
I see ‘Outside’ as conventional approaches that are just unusual within EA circles. I see ‘Weird’ as finding something niche enough that you could become a leading expert on it within a couple of years. I’m not sure if that’s exactly what the authors meant, but that’s my interpretation.
I love this, and I’m sure I’ll link to it when providing career advice. It’s easy to read and remember. Thanks for writing it up!
Thanks for the great post. Ryan, I’m curious how you figured this at an early stage:
I thought that more technical skills were rarer, were neglected in some parts of academia (e.g. in history), and were the main thing holding me back from being able to understand papers about emerging technologies… Also, I asked Carl S, and he thought that if I was to go into research, these would be the best skills to get. Nowadays, one could ask a lot more different people.
How’d you decide to go focus on going into research, even before you decided that developing technical skills would be helpful for that path?
I was influenced at that time by people like Matt Fallshaw and Ben Toner, who thought that for sufficiently good intellectual work, funding would be forthcoming. It seemed like insights were mostly what was needed to reduce existential risks...
This post was awarded an EA Forum Prize; see the prize announcement for more details.
My notes on what I liked about the post, from the announcement:
Good post.
Seems like writing blog posts is another possibility. A good fraction of the most prominent EAs appear to have achieved their prominence through writing.
Aha! This made it click for me. I was confused by this whole issue where people can’t get jobs at prestigious EA orgs. Something felt backwards about it.
Let’s say you want to solve some problem in the world and you conclude that the most effective way for you to push on the problem is to take the open research position at organization X.
But you find out that there’s someone even better for that position than you who will take it. Splendid! Now your hands are free to take the only slightly less effective position at organization Y! It’s as if you got a clone for free—now we’re surely getting closer to the solution of the problem than you originally expected!
But again, you find out someone better suited will be taking the position instead of you. Marvelous! So many people are working on the problem; as someone who just wants the problem solved (right?), you couldn’t wish for anything better! Off to the next task on the to-do list—hopefully someone is already taking care of that one as well!
…But, weirdly enough, as people get rejected from position after position, they get more and more frustrated and sullen. How so?
I think it makes more sense to me if, instead of “how can I maximize the amount of progress made on the most important problem”, I model people as asking “how can I achieve prominence in the EA community?” Then, of course, if it’s someone else achieving prominence instead of you, you’re going to get frustrated instead of delighted.
Does this make sense to anyone else or have I read too much Robin Hanson?
I’m broadly sympathetic to this view, though I think another possibility is that people want to maximise personal impact, in a particular sense, and that this leads to optimising for felt personal impact more than actually optimising for amount of overall good produced.
For example, in the context of charitable donations, people seem to strongly prefer that their donation specifically goes to impact-producing things rather than overhead that ‘merely’ supports impact-producing things, and that someone else’s donation goes to cover the overhead (Gneezy et al., 2014). But, of course, in principle, these scenarios are exactly functionally equivalent.
In the direct work case, I imagine that this kind of intrinsic preference for specifically personal impact, a bias towards over-estimating the importance of impact which an individual themselves brings about and signalling/status considerations/extraneous motivations may all play a role.
I don’t think you read too much Robin Hanson, it clarifies a lot of things :)
In some sense, I don’t even think these people are wrong to be frustrated. You have to satisfy your own needs before you can effectively help others. One of these needs just happens to be the need to feel relevant. And like everything else, this is a systemic problem. EA should try to make people feel relevant if and only if they’re doing good. If doing good doesn’t get you recognition unless you’re in a prestigious organisation, then we have to fix that.
Yes, makes sense.
I would even say something like “iff they’re making an honest attempt at doing good”, because the kids are suffering from enough crippling anxiety as it is :)
Regarding applying to EA organizations, I think we can simply say that the applicants are doing good by applying. Many of the orgs have explicitly said they want lots of applicants—the applicants aren’t wasting the orgs’ time, but helping them get better candidates (in addition to learning a lot through the process, etc).
Yup. See @sdspikes comment above.