I work at 80,000 Hours, talking to people about their careers; opinions I share here are my own.
These are great things to check! It’s especially important to do this kind of due diligence if you’re leaving your support network behind (e.g. moving country). Thanks for spelling things out for people new to the job market ❤️
Thanks so much for sharing this, Michelle! It’s always strange to visit our past selves, remembering who we used to be and thinking about all of the versions of ourselves we chose not to become.
I’m glad you became who you are now ❤️
Hahaha, thanks for posting!! :)
This is a really interesting question! Unfortunately, it was posted a little too late for me to run it by the team to answer. Hopefully other people interested in this topic can weigh in here. This 80k podcast episode might be relevant? https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/
This is an interesting idea! I don’t know the answer.
Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!
On calls, the way I do this is to not assume people are part of the EA community, and instead to see what their personal mindset is when it comes to doing good.
I think 80k advisors give good advice. So I hope people take it seriously but don’t follow it blindly.
Giving good advice is really hard, and you should seek it out from many different sources.
You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings, and we sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimistic about acceptance rates at different orgs. I feel really bad about this (my mistakes were pointed out by another advisor, and I’ve since course-corrected); I’m just being explicit that we do make mistakes!
Tricky, multifaceted question. Basically, I think some people obsess too much over intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done.

If you’re interested in AI safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider applying to orgs, or to some alignment bootcamps. If you’re not getting any traction on applications, consider upskilling in a PhD program or in industry.

Some 80k advisors are more keen on independent research or taking time off to upskill; I’m not as keen on this. I would totally fail at structuring my time during an independent upskilling period, and I could see myself becoming quite isolated/anxious/depressed doing this. So I would prefer to see people pick up technical skills in a more structured way.

For people who try all these things and still think they’re not making valuable progress, I would suggest a pivot into governance, support/non-technical roles at AI safety relevant orgs, or E2G. Or potentially another cause entirely!
I don’t have as many opinions about outreach strategies for getting people into AI safety work; overall, outreach seems good, but maybe the focus should be “AI risk is a problem” more than “You should work at these specific orgs!” There are probably a lot of ways outreach can go badly or be counterproductive, so I think a lot of caution is needed: if people disagree with your approach, try to find out why and incorporate the fact of their disagreement into your decision making.
Alex Lawsen, my ex-supervisor who just left us for Open Phil (miss ya 😭), recently released a great 80k After Hours Podcast on the top 10 mistakes people make! Check it out here: https://80000hours.org/after-hours-podcast/episodes/alex-lawsen-10-career-mistakes/
We had a great advising team chat the other day about “sacrificing yourself on the altar of impact”. Basically, we talk to a lot of people who feel like they need to sacrifice their personal health and happiness in order to make the world a better place.
The advising team would actually prefer that people build lives that are sustainable: they make enough money to meet their needs, they have somewhere safe to live, their work environment is supportive and non-toxic, etc. We think that setting up a lifestyle where you can comfortably work for the long term (and not quickly flame out) is probably best for having a greater positive impact.
Another thing I talk about a lot on calls: the job market can be super competitive. Don’t over-update on the strength of your CV if you only apply to two places and get rejected; you probably shouldn’t conclude much until you’ve been rejected without an interview 10 times (this number is somewhat arbitrary, but it’s a reasonable rule of thumb). If you keep getting rejected with no interviews, then it makes sense to upskill in industry before working in a directly impactful role; this was the path to impact for a huge number of our most productive community members, and it should not be perceived negatively! Job applications can also be noisy, so if you want an ambitious job, you should probably apply widely and expect quite a few rejections. Luisa Rodriguez has a great piece on dealing with rejection. One line I like a lot is: “If I’m not getting rejected, I’m not being ambitious enough.”
I love my job so much! I spend all day talking to kind-hearted people who want to save the world; what could be better?
I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom.
Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aimed towards a general audience or at least large swathes of people.
One last thing I’ll add here is that I’ve been a full time advisor for less than a year, but I’ve already spoken to over 200 people. All of these people are welcome to contact me after our call if new questions/decisions pop up. Plus I talk to more new people each week. So I spend a *lot* of time answering emails.
Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people who are doing great work who aren’t at a place yet where we think we can be much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour.
Also, you can totally apply again 6 months after your initial application and we will not consider it the least bit spammy. (I’ve spoken to many people who got rejected the first time they applied!)
Another thing to consider is that a lot of the value from the call can be captured by doing these things:
1. Read our online career guide.
2. Take time to reflect on your values and career. Give yourself 1 hour of dedicated time to do this, and fill out the doc that we would have gone through during the call: Career Reflection Template.
3. Send your answers on the doc to somebody you trust to get feedback on how you’re thinking through things.
Sudhanshu is quite keen on this, haha! I hope that, at the moment, our advisors are more clever and give better advice than GPT-4. But I’m keeping my eye out for Gemini ;) Seriously though, an advising chatbot seems like a very big project to get right, and we don’t currently have the capacity.
This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions. Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could have some comparative advantage, but to ultimately defer to the advisee on their preferences; we’re big believers in people doing what they’re actually motivated to do. We don’t think it’s sustainable in the long term to work on something that you’re not so interested in.
I also don’t think we track what % of people *we* think should go into AI safety. We don’t think everybody should be working on our top problems (again, see “do you think everyone should work on your top list of world problems” https://80000hours.org/problem-profiles/#problems-faq). But AI risk is the world problem we rank as most pressing, and we’re very excited about helping people productively work in this area. If somebody isn’t excited by it or doesn’t seem like a good fit, we will discuss what they’re interested in instead. Some members of our team considered AI safety as a career path but realised it’s not for them, so we’re very sympathetic to this! For example, I applied for a job at an AI safety lab and was rejected.
Re: calls not being worth people’s time: on a 7-point scale (1 = “useless”, 4 = “somewhat useful”, 7 = “really useful”), most of my advisees consider their calls useful; 97% said their call was at least somewhat useful (i.e. at least a 4/7), and 75% rated it a 6/7 or 7/7. So it seems like a reasonable way to spend a couple of hours (between prep, the call, and reflection) of your life ;)
Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, and economic forces shape our lives. They inform global power conflicts, the different aims and outcomes of similar-sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and it can potentially give you insights into effective solutions. It’s often a good background for entering policy careers, which can be really broadly impactful, though you may benefit from additional credentials, like a master’s. It also opens up some earning-to-give opportunities that let you stay neutral and dynamically direct your annual donations to whatever cause you find most pressing or whatever opportunities you see as most promising. So I think you can do cool research at a think tank and/or standard E2G stuff in finance with just a bachelor’s in economics.
Mid-career professionals are great; you actually have specific skills and a track record of getting things done! One thing to consider is looking through our job board, filtering for jobs that need mid/senior levels of experience, and applying for anything that looks exciting to you. As of writing this answer, we have 392 jobs open for mid/senior-level professionals. Lots of opportunities to do good :)
It would be awesome if there were more mentorship/employment opportunities in AI safety! I agree this is a frustrating bottleneck, and I would love to see more senior people enter this space and open up new opportunities. The mentorship bottleneck definitely makes it less valuable to try to enter technical AI safety on the margin, although we still think it’s often a good move to try if you have the right personal fit. I’d also add that this bottleneck is much smaller if you: 1. enter via more traditional academic or software engineering routes rather than via ‘EA fellowships’ (these routes are our top recommendations anyway); or 2. work on AI risk through governance or other non-technical routes.
I’ll add that some people who try to work in technical AI safety won’t end up getting a job in the field. But one reason we feel very comfortable recommending this path is that the career capital you build along the way is highly valuable, including for other potentially impactful paths. For instance, you can use ML knowledge to become a valuable advisor to policymakers on AI governance issues. You could upskill in infosecurity and make that your comparative advantage. If you’re skilled as an ML engineer, one of your best options may just be earning to give for a while (provided you don’t work somewhere actively harmful), and this also leaves open the possibility of entering AI safety work down the road if more opportunities open up. As somebody who did a psych/neuro PhD, I can confidently say that the most productive researchers in my field (and those doing the coolest research, in my opinion) were people who had a background in ML, so upskilling in these technical fields just seems broadly useful.
There are many different bottlenecks in the AI safety space. On the technical side, it has become very competitive to get a job at research labs. If technical research is what you’re aiming for, I would potentially recommend doing a PhD, or upskilling in industry. For AI governance, I think there are a ton of opportunities available. I would read through the AI Safety Fundamentals Governance class and this EA forum account to get more information on good ideas in governance and how to get started in the US government.

If you’re feeling totally burnt out on AI safety, I would keep in mind that there are a huge number of ways to have a big impact on the world. Our career guide is tailored to a general audience, but every individual has different comparative advantages; if Shakira asked me whether she should quit singing to upskill in ML, I would tell her she is much better placed to continue being an artist, but to use her platform to spread important messages. I’m not saying that you too could be a global pop sensation, but there’s probably something you could totally kick ass at, and you should potentially design your career around going hard on that.

To answer your second question: we’re trying to talk to older people who can be mentors in the space, and we try to connect younger people with older people outside standard orgs. We also speak to people who are considering spinning up new orgs to provide more opportunities. If this is something you’re considering doing, definitely apply to us for coaching!

I think it’s also important to highlight something from Michelle’s post on Keeping Absolutes In Mind. She’s an excellent writer, so I’ll just copy the relevant paragraph here: “For effective altruism to be successful, we need people working in a huge number of different roles – from earning to give to politics and from founding NGOs to joining the WHO. Most of us don’t know what the best career for us is. That means that we need to apply to a whole bunch of different places to find our fit. Then we need to maintain our motivation even if where we end up isn’t the place we thought would be most impactful going in. Hopefully by reminding ourselves of the absolute value of every life saved and every pain avoided we can build the kind of appreciative and supportive community that allows each of us to do our part, not miserably but cheerfully.”
Our advising is most useful to people who are interested in or open to working on the top problem areas we list, so we’re certainly more likely to point people toward working on causes like AI safety than away from them. We don’t want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore work in the fields we’re most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. The spread in personal fit is also often larger than the spread between problems.
I don’t have good statistics on what cause areas people are interested in when they first apply for coaching versus what we discuss on the call or what they end up pursuing. Anecdotally, if somebody applies for coaching but feels good about their role and the progress they’re making, I usually won’t strongly encourage them to work on something else. But if somebody is working on AI safety and is burnt out, I would definitely explore other options with them. (I can’t speak confidently on how often this happens, sorry!) People with skills in this area will be able to contribute in a lot of different ways.
We also speak to people who did a big round of applications to AI safety orgs, didn’t make much progress, and want to think through what to do next. In these cases, we discuss ways they could invest in themselves, sometimes via more school or more industry work, or by trying to have an impact in something other than AI safety.
This is so interesting; thanks for writing this up, Jess! As one of your 80k coworkers, I’m always blown away by how organized and detail-oriented you are. Reading about your general approach to solving problems and your mindset about your job, I’m not surprised that you’re always trying to anticipate how to improve processes for the team, but it’s still super impressive!
To others reading this post: I also endorse 80k as a cool place to work ;)