People are often surprised that full-time advisors only do ~400 calls/year as opposed to something like 5 calls/day (i.e. 1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what’s going on in the world and the job markets we track, as well as skilling up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
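If it helps to see the arithmetic behind that, here’s a minimal sketch of the BOTEC. The 2.25 hours per advisee and ~400 calls/year are from above; the number of focused working days is purely my own illustrative assumption:

```python
# Rough sketch of the advising time budget described above.
# 2.25 hours/advisee and ~400 calls/year come from the answer;
# 220 focused working days/year is an illustrative assumption, not an 80k figure.

HOURS_PER_ADVISEE = 2.25    # prep + call + notes/research + intro admin + follow-up emails
CALLS_PER_YEAR = 400
FOCUSED_WORKING_DAYS = 220  # assumption

advisee_hours_per_year = HOURS_PER_ADVISEE * CALLS_PER_YEAR            # 900 hours
advisee_hours_per_day = advisee_hours_per_year / FOCUSED_WORKING_DAYS  # ~4.1 hours

# The naive "5 calls/day" intuition implies ~1,300 calls/year,
# which leaves no room for research, marketing, impact assessment, etc.
naive_calls_per_year = 5 * 260

print(f"~{advisee_hours_per_year:.0f} advisee-facing hours/year "
      f"(~{advisee_hours_per_day:.1f} h/day), vs a naive {naive_calls_per_year} calls/year")
```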
I love my job so much! I talk to kind hearted people who want to save the world all day, what could be better?
I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom.
Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aimed towards a general audience or at least large swathes of people.
One last thing I’ll add here is that I’ve been a full time advisor for less than a year, but I’ve already spoken to over 200 people. All of these people are welcome to contact me after our call if new questions/decisions pop up. Plus I talk to more new people each week. So I spend a *lot* of time answering emails.
Do you have approximate statistics on the percentage distribution of paths you most commonly recommend during your 1-1 calls? In particular AI Safety related vs anything else, and in AI Safety working at top labs vs policy vs theoretical research. For example: “we recommend 1% of people in our calls to consider work in something climate-related, 50% consider work in AI Safety at OpenAI/other top labs, 50% to consider work in AI-policy, 20% to consider work in biosecurity, 30% in EA meta, 5% in earning to give, …”
I ask because I’ve heard the meme that “80,000 Hours calls are not worth the time; they just tell everyone to go into AI safety.” I think it’s not true, but I would like to have some data to refute it.
This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions. Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could have some comparative advantage, but to ultimately defer to the advisee on their preferences; we’re big believers in people doing what they’re actually motivated to do. We don’t think it’s sustainable in the long term to work on something that you’re not so interested in.
I also don’t think we track what % of people *we* think should go into AI safety. We don’t think everybody should be working on our top problems (again see “do you think everyone should work on your top list of world problems” https://80000hours.org/problem-profiles/#problems-faq). But AI risk is the world problem we rank as most pressing, and we’re very excited about helping people work productively in this area. If somebody isn’t excited by it or doesn’t seem like a good fit, we will discuss what they’re interested in instead. Some members of our team are people who considered AI safety as a career path but realised it’s not for them — so we’re very sympathetic to this! For example, I applied for a job at an AI Safety lab and was rejected.
Re: calls not being worth people’s time, on a 7-point scale (1 = “useless”, 4 = “somewhat useful”, 7 = “really useful”) most of my advisees consider their calls to be useful; 97% said their call was at least somewhat useful (aka at least a 4/7), and 75% rated it a 6/7 or 7/7. So it seems like a reasonable way to spend a couple of hours (between prep/call/reflection) of your life ;)
Arden here—I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!
We have several different programmes, which face different bottlenecks. I’ll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the “current challenges” sections for each programme (though that’s from some months ago).
Some current bottlenecks:
More writing and research capacity to further improve our online career advice and keep it up to date.
Better web analytics – we have trouble getting good data on what different groups of users like most and what works best in marketing, so aren’t able to iterate and scale as decisively as we’d like.
More great advisors to add to our one-on-one team, so we can do more calls – in fact, we’re hiring for this right now!
There are uncertainties about the world that create strategic uncertainties for the organisation as a whole—e.g. what we should expect to happen with TAI and when. These affect the content of our careers advice as well as overall things like ‘which audiences should the different programmes focus on?’ (For example, in the AI timelines case, if we were confident in very short timelines it’d suggest focusing on older audiences, all else equal).
We’re also a growing, mid-sized org, so we have to spend more time on processes and coordination than we used to. Though we’re making good progress here (e.g. we’re training up a new set of “middle managers” to scale our programmes).
Tracking and evaluating our impact – to know what’s working well and where to invest less – is always challenging, as impacts on people’s careers are hard to find out about, often take years, and are sometimes difficult to evaluate. This means our feedback loops aren’t as strong as would be ideal for making plans and evolving our strategy.
I think there are themes around time/capacity, feedback loops, and empirical uncertainties, some of which are a matter of spending more research time, some of which are harder to make progress on.
I’m curious what your perspective is on the value of economics as a major for those who don’t wish to pursue a PhD? In particular I’m curious about the following excerpt on choosing a major from https://80000hours.org/articles/college-advice/
“Putting all this together, and holding all else equal:
We think it’s reasonable to aim for the most fundamental, quantitative option you can do, i.e. one of these in the following order: mathematics, economics, computer science, physics, engineering, political science/chemistry/biology (the last three are roughly equal).”
Personally I would’ve considered computer science, physics and engineering to be more quantitative than economics. In my experience these are also considered harder majors, thus sending a stronger signal to employers.
(Disclaimer: I am studying economics myself, so perhaps I’m looking for some reassurance :))
Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, but economic forces shape our lives. Economic forces inform global power conflict, the different aims and outcomes of similar-sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and potentially give you insights into effective solutions. It’s often a good background for entering policy careers, which can be really broadly impactful, though you may benefit from additional credentials, like a master’s. It also opens up some earning to give opportunities that let you stay neutral and dynamically direct your annual donations to whatever cause you find most pressing or opportunities you see as most promising. So I think you can do cool research at a think tank and/or standard E2G stuff in finance with just a bachelor’s in economics.
As an early career executive assistant, I’m watching the world of admin/ops work rapidly change—currently, for the better due to increased efficiency—with AI tool adoption. I want to avoid wasting time learning skills or gaining experience in a role that will become obsolete in the near to medium term future. Does 80K Hours have advice for strategically thinking about upskilling, specializing, and/or pivoting to alternate career paths? Do you foresee roles like personal assistants remaining relevant and impactful (if at high-impact organizations)?
I’m no longer on the team but my hot take here is that a good bet is just going to be trying really hard to work out which tools you can use to accelerate/automate/improve your work. This interview with Riley Goodside might be interesting to listen to, not only for tips on how to get more out of AI tools, but also to hear about how the work he does in prompting those tools has rapidly changed, but that he’s stayed on the frontier because the things he learned have transferred.
What is your thinking on how people should think about their intelligence when it comes to pursuing careers in AI safety? Also, what do you think about this in terms of field building?
I think that there are a lot of people who are “smart” but may not be super-geniuses like the next von Neumann or Einstein, who might be interested in pursuing AI safety work but are uncertain about how much impact they can really have. In particular, I can envision cases where one might enjoy thinking about thought experiments, reading research on the AI Alignment Forum, writing their own arguments, etc., but might not produce valuable output for a year or more. (At the same time, I know there are cases where someone could become really productive whilst taking a year or more to reach this point.) What advice would you give to this kind of person in thinking about career choice? I am also curious how you think about outreach strategies for getting people into AI safety work. For example, the balance between trying to get the word out as much as possible and keeping outreach to lower scales so that only people who are really capable would be likely to learn about careers in AI safety.
Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI Safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider applying to orgs, or to some alignment bootcamps. If you’re not getting any traction on applications, consider upskilling in a PhD program or industry. Some 80k advisors are more keen on independent research/taking time off to upskill; I’m not as keen on this. I would totally fail at structuring my time during an independent upskilling period, and I could see myself becoming quite isolated/anxious/depressed doing this. So I would prefer to see people pick up technical skills in a more structured way. For people who try all these things and still think they’re not making valuable progress, I would suggest a pivot into governance, support/non-technical roles at AI safety relevant orgs, or E2G. Or potentially another cause entirely!
I don’t have as many opinions about outreach strategies for getting people into AI Safety work; overall outreach seems good, but maybe the focus should be “AI risk is a problem” more than “You should work at these specific orgs!” And there are probably a lot of ways outreach can go badly or be counterproductive, so I think a lot of caution is needed — if people disagree with your approach, try and find out why and incorporate the fact of their disagreement into your decision making.
It’s not a full answer but I think the section of my discussion with Luisa Rodriguez on ‘not trying hard enough to fail’ might be interesting to read/listen to if you’re wondering about this.
In the coming years, do you plan on making a questionnaire to determine career paths, similar to Giving What We Can? Or maybe something similar but different?
80k seems to exist in a strange equilibrium where they are always asking more people to apply but also rejecting many people. The revealed preferences here are so clear as to be pretty cutting at times. Is there a way that people can orient towards applying even though there is a high chance of rejection?
I was rejected from career advising when I applied! So I definitely am aware it can be costly. I won’t name names, but I also know of some other people who have gone on to have successful careers in the space who were rejected. Sometimes, this is because reviewing is hard, and we make mistakes. Sometimes, this is because the thing the applicant needs most is to just read more of 80k’s broad advice before trying to tailor it specifically to them. We’re trying to use our time as best we can and to provide support to the people who would most benefit from our advice, so if we can cast a wider net and get more of those people to apply, we want to do that. But I hope we can minimize the costs anyone experiences. I know some people benefit just from thinking through the questions in the application, and we’ve updated the application to make it less work for people. And we really encourage people not to take it as a strong negative signal if they don’t get an advising call — I’d appreciate any additional suggestions on how to convey this message!
> Is there a way that people can orient towards applying even though there is a high chance of rejection?
While it’s easier said than done, I’d try to think of applying as being mostly upside—the application is a useful exercise for career planning in and of itself, and then if we think it makes sense to have a call, you’ll get some extra advice.
Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people who are doing great work who aren’t at a place yet where we think we can be much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour.
Also, you can totally apply again 6 months after your initial application and we will not consider it the least bit spammy. (I’ve spoken to many people who got rejected the first time they applied!)
Another thing to consider is that a lot of the value from the call can be captured by doing these things:
Read our online career guide
Take time to reflect on your values and career. Give yourself 1 hour of dedicated time to do this. Fill out the doc that we would have gone through during the call here: Career Reflection Template
Send your answers on the doc to somebody you trust to get feedback on how you’re thinking through things.
Some of these are low-quality questions. Hopefully they contain some useful data about the type of thoughts some people have, though. I left in even the low-quality ones in case they’re useful, but don’t feel forced to read beyond the bolded beginning of each; I don’t want to waste your time.
What is 80,000 hours’ official timeline? Take-off speed scenario? I ask this question to ask how much time you guys think you’re operating on. This affects some earn-to-give scenarios like “should I look for a scalable career in which it might take years, but I could be reliably making millions by the end of that time?” versus closer-scale scenarios like “become a lawyer next year and donate to alignment think tanks now.”
How worried should I be about local effects of narrow AI? The coordination problem of humanity as well as how much attention is given to alignment and how much attention is given to other EA projects like malaria prevention or biosecurity are things that matter a lot. They could be radically affected by short-term effects of narrow AI, like, say, propaganda machines with LLMs or bioweapon factories with protein folders. Is enough attention allocated to short-term AI effects? Everybody talks about alignment, which is the real problem we need to solve, but all the little obstacles we’ll face on the way will matter a lot as well because they affect how alignment goes!
Does AI constrict debate a bit? What I mean by this is: most questions here are somewhat related to AI and so are most EA thinking efforts I know of. It just seems to be that AI swallows every other cause up. Is this a problem? Because it’s a highly technical subject, are you too swamped with people who want to help in the best way, discover that most things that are not AGI don’t really matter because of how much the latter shapes literally everything else, but simply wouldn’t be very useful in the field? Nah nevermind this isn’t a clear question. This might be better: is there such a thing as too-much-AI burnout? For EAs, should a little bit of a break exist, something which is still important, only a little less, with which they could concentrate on, if only because they will go a little insane concentrating on AI only? Hm.
What is the most scalable form of altruism that you’ve found? Starting a company and hopefully making a lot of money down the line might be pretty scalable—given enough time, your yearly donations could be in the millions, not thousands. Writing a book, writing blog posts, making YouTube videos or starting a media empire to spread the EA memeplex would also be a scalable form of altruism, benefiting the ideas that save the most lives. AI alignment work is (and technically also capabilities, but capabilities is worse than useless without alignment) scalable, in a way, because once friendly AGI is created pretty much every other problem humanity faces melts away. Thanks to your studies and thinking, which method, out of these or more that you know, might be the most scalable form of altruism you can imagine?
What book out of the three free books you offer should I give to a friend? I have not yet read the 80,000 Hours guide, nor have I read Doing Good Better, but I have read The Precipice. I want to see if I can convert/align a friend to EA ideas by having them read a book, but I’m not sure which one is best. Do you have any suggestions? Thanks for offering the book free, by the way! I’m a high-schooler and don’t even have a bank account, so this is very valuable.
How is the Kurzgesagt team? I know that question looks out of nowhere and you probably aren’t responsible for whatever part of 80,000 Hours takes care of the PR, but I noticed that you sponsored the Kurzgesagt video about Boltzmann brains that came out today. I’ve noticed that over time, Kurzgesagt seems to have become more and more aligned with the EA style of thinking. Have you met the team personally? What ambitions do they have? Are they planning on collaborating with EA organizations in the far future, or is this just part of one “batch” of videos? Are they planning on a specifically-about-altruism video soon? Or, more importantly: Kurzgesagt does not have any videos on AGI, the alignment problem, or existential threats in general (despite flirting with bioweapons, nukes and climate change). Are they planning on one?
How important is PR to you and do you have future plans for PR scalability? As in, do you have a plan for racking up an order of magnitude more readers/followers/newsletter subscribers/whatever or not? Should you? Have you thought about the question enough to establish it wouldn’t be worth the effort/time/money? Is there any way people on here could help? I don’t know what you guys use to measure utilons/QALY, but how have you tried calculating the dollar-to-good ratio of PR efforts on your part?
Do you think most people, if well explained, would agree with EA reasoning? Or is there a more fundamental human-have-different-enough-values thing going on? People care about things like other humans and animals, only some things like scope insensitivity stop them from spending every second of their time trying to do as much altruism as possible. Do you think it’s just that? Do you think for the average person it might only take a single book seriously read, or a few blog posts/videos for them to embark on the path that leads toward using their career for good in an effective manner? How much do you guys think about this?
I’ll probably think of more questions if I continue thinking about this, but I’ll stop it here. You probably won’t get all the way down to this comment anyway, I posted pretty late and this won’t get upvoted much. But thanks for the post anyway, it had me thinking about this kind of things! Good day!
Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!
What processes do you have in place to monitor potentially harmful advice advisees may be given on calls, and to counteract nonreporting due to anonymity concerns?
Ideally, could you share a representative example of how cases like this were handled and what procedure your team followed?
Advisors often only have a few pages of context and a single call (sometimes there are follow-ups) to talk about career options. In my experience, this can be pretty insufficient to understand someone’s needs.
I would be worried that they might push people towards something that may not make sense, and two things could happen: 1) the person may feel more pressure to pursue something that’s not a good fit for them; 2) if they disagree with the advice given, they may not raise it. For example, they may not feel comfortable raising the issue because of concerns around anonymity and potential career harm, since your advisors are often making valuable connections and sharing potential candidate names with orgs that are hiring.
I know that 80K don’t want people to take their advice so seriously, and numerous posts have been written on this topic. However, I think these efforts won’t necessarily negate 1) and 2) because many 80K advisees may not be as familiar with all of 80K’s content or Forum discourse, and the prospect of valuable connections remains nonetheless.
I personally had a positive experience during a career call, but have heard of a handful of negative experiences second- and third-hand.
[I left 80k ~a month ago, and am writing this in a personal capacity, though I showed a draft of this answer to Michelle (who runs the team) before posting and she agrees it provides an accurate representation. Before I left, I was line-managing the 4 advisors, two of whom I also hired.]
Hey, I wanted to chime in with a couple of thoughts on your followup, and then answer the first question (what mechanisms do we have in place to prevent this). Most of the thoughts on the followup can be summarised by ‘yeah, I think doing advising well is really hard’.
Advisors often only have a few pages of context and a single call (sometimes there are follow-ups) to talk about career options. In my experience, this can be pretty insufficient to understand someone’s needs.
Yep, that’s roughly right. Often it’s less than this! Not everyone takes as much time to fill in the preparation materials as it sounds like you did. One of the things I frequently emphasised when hiring for and training advisors was asking good questions at the start of the call to fill in gaps in their understanding, check it with the advisee, and then quickly arrive at a working model that was good enough to proceed with. Even then, this isn’t always going to be perfect. In my experience, advisors tend to do a pretty good job of linking the takes they give to the reasons they’re giving them (where, roughly speaking, many of those reasons will be aspects of their current understanding of the person they’re advising).
the person may feel more pressure to pursue something that’s not a good fit for them
With obvious caveats about selection effects, many of my advisees expressed that they were positively surprised at me relieving this kind of pressure! In my experience advisors spend a lot more time reassuring people that they can let go of some of the pressure they’re perceiving than the inverse (it was, for example, a recurring theme in the podcast I recently released).
if they disagree with the advice given, they may not raise it. For example, they may not feel comfortable raising the issue because of concerns around anonymity and potential career harm, since your advisors are often making valuable connections and sharing potential candidate names with orgs that are hiring.
This is tricky to respond to. I care a lot that advisees are in fact not at risk of being de-anonymised, slandered, or otherwise harmed in their career ambitions as a result of speaking to us, and I’m happy to say that I believe this is the case. It’s possible, of course, for advisees to believe that they are at risk here, and for that reason or several possible other reasons, to give answers that they think advisors want to hear rather than answers that are an honest reflection of what they think. I think this is usually fairly easy for advisors to pick up on (especially when it’s for reasons of embarrassment/low confidence), at which point the best thing for them to do is provide some reassurance about this.
I do think that, at some point, the burden of responsibility is no longer on the advisor. If someone successfully convinces an advisor that they would really enjoy role A, or really want to work on cause Z, because they think that’s what the advisor wants to hear, or they think that’s what will get them recommended for the best roles, or introduced to the coolest people, or whatever, and the advisor then gives them advice that follows from those things being true, I think that advice is likely to be bad advice for that person, and potentially harmful if they follow it literally. I’m glad that advisors are (as far as I can tell) quite hard to mislead in this way, but I don’t think they should feel guilty if they miss some cases like this.
I know that 80K don’t want people to take their advice so seriously, and numerous posts have been written on this topic. However, I think these efforts won’t necessarily negate 1) and 2) because many 80K advisees may not be as familiar with all of 80K’s content or Forum discourse, and the prospect of valuable connections remains nonetheless.
There might be a slight miscommunication here. Several of the posts (and my recent podcast interview) talking about how people shouldn’t take 80k’s advice so seriously are, I think, not really pointing at a situation where people get on a 1on1 call and then take the advisor’s word as gospel, but more at things like reading a website that’s aimed at a really broad audience, and trying to follow it to the letter despite it very clearly being the case that no single piece of advice applies equally to everyone. The sort of advice people get on calls is much more frequently a suggestion of next steps/tests/hypotheses to investigate/things to read than “ok here is your career path for the next 10 years”, along with the reasoning behind those suggestions. I don’t want to uncritically recommend deferring to anyone on important life decisions, but on the current margin I don’t think I’d advocate for advisees taking that kind of advice, expressed with appropriate nuance, less seriously.
OK, but what specific things are in place to catch potential harm?
There are a few things that I think are protective here, some of which I’ll list below, though this list isn’t exhaustive.
Internal quality assurance of calls
The overwhelming majority of calls we have are recorded (with permission), and many of these are shared for feedback with other staff at the organisation (also with permission). To give some idea of scale, I checked some notes and estimated that (including trials, and sitting in on calls with new advisors or triallists) I gave substantive feedback on over 100 calls, the majority of which were in the last year. I was on the high end for the team, though everyone in 80k is able to give feedback, not only advisors.
I would expect anyone listening to a call in this capacity to flag, as a priority, anything that seemed like an advisor saying something harmful, be that because it was false, displayed an inappropriate level of confidence, or because it was insensitive.
My overall impression is that this happens extremely rarely, and that the bar for giving feedback about this kind of concern was (correctly) extremely low. I’m personally grateful, for example, for some feedback a colleague gave me about how my tone might have been perceived as ‘teacher-y’ on one call I did, and another case where someone flagged that they thought the advisee might have felt intimidated by the start of the conversation. In both cases, as far as I can remember, the colleague in question thought that the advisee probably hadn’t interpreted the situation in the way they were flagging, but that it was worth being careful in future. I mention this not to indicate that I never made mistakes on calls, but instead to illustrate why I think it’s unlikely that feedback would miss significant amounts of potentially harmful advice.
Advisee feedback mechanisms
There are multiple opportunities for people we’ve advised to give feedback about all aspects of the process, including specific prompts about the quality of advice they received on the call, any introductions we made, and any potential harms.
Some of these opportunities include the option for the advisee to remain anonymous, and we’re careful not to accidentally collect de-anonymising information, though no system is foolproof. As one example, we don’t give an option to remain anonymous in the feedback form we send immediately after the call (as depending on how many other calls were happening at the time, someone filling it in straight away might be easy to notice), but we do give this option in later follow-up surveys (where the timing won’t reveal identity).
In user feedback, the most common reason given by people who said 1on1 caused them harm is that they were rejected from advising and felt bad/demotivated about that. The absolute numbers here are very low, but there’s an obvious caveat about non-response bias.
On specific investigations/examples
I worked with community health on some ways of preventing harm being done by people advisors made introductions to (including, in some cases, stopping introductions).
I spent more than 5 but less than 10 hours, on two occasions, investigating concerns that had been raised to me about (current or former) advisors, and feel satisfied in both cases that our response was appropriate, i.e. that there was not an ongoing risk of harm following the investigation.
Despite my personal bar for taking concerns of this sort seriously being pretty low compared to my guess at the community average (likely because I developed a lot of my intuitions for how to manage such situations during my previous career as a teacher), there were few enough incidents meriting any kind of investigation that I think giving any more details than the above would not be worth the (small) risk of deanonymising those involved. I take promises of confidentiality really seriously (as I hope would be expected for someone in the position advisors have).
Thanks for this in-depth response, it makes me feel more confident in the processes for the period of time when you were at 80K.
However, since you have left the team, it would be helpful to know which of these practices your successor will keep in place and how much they will change; for example, you mentioned that you were on the high end for giving feedback on calls.
My understanding of many meta EA orgs is that individuals have a fair amount of autonomy. This definitely has its upsides, but it also means that practices can change (substantially) between managers.
I wouldn’t expect the attitude of the team to have shifted much in my absence. I learned a huge amount from Michelle, who’s still leading the team, especially about management. To the extent you were impressed with my answers, I think she should take a large amount of the credit.
On feedback specifically, I’ve retained a small (voluntary) advisory role at 80k, and continue to give feedback as part of that, though I also think that the advisors have been deliberately giving more to each other.
The work I mentioned on how we make introductions to others and track the effects of those, including collaborating with CH, was passed on to someone else a couple of months before I left, and in my view the robustness of those processes has improved substantially as a result.
I have seen way too many people not wanting to apply for 80k calls because they aren’t EAs or don’t want to work in x-risk areas. It almost seems like the message is “80k is an EA-aligned-only service.” How is the team approaching this (changes in messaging, for example)?
Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement outside the very broad bounds of wanting to do good and being scope sensitive with regard to that good. Accordingly, I think it’s worth the 10 minutes to apply if you 1) have read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I’m not worried about our potential user base being too small.
So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads for example, I’m not worried about being too narrow.
On calls, the way I do this is to not assume people are part of the EA community, and instead see what their personal mindset is when it comes to doing good.
I think 80k advisors give good advice. So I hope people take it seriously but don’t follow it blindly.
Giving good advice is really hard, and you should seek it out from many different sources.
You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings, and sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimistic about acceptance rates at different orgs. I feel really bad about this (my mistakes were pointed out by another advisor and I’ve since course-corrected); just being explicit that we do make mistakes!
I think a chatbot fails the cost-benefit analysis pretty badly at this point. There are big reputational hits organizations can take for giving bad advice, and potential hallucinations just create a lot of surface area there. Importantly, the upside is quite minimal too. If a user wants to, they can pull up ChatGPT and ask it to act as an 80k advisor. It might do okay (probably about as well as anything we tried to develop ourselves), only it’d be much clearer that we didn’t sanction its output.
Sudhanshu is quite keen on this, haha! I hope that at the moment our advisors are more clever and give better advice than GPT-4. But keeping my eye out for Gemini ;) Seriously though, it seems like an advising chat bot is a very big project to get right, and we don’t currently have the capacity.
How often do you direct someone away from AI Safety to work on something else (say global health and development)?
Our advising is most useful to people who are interested in or open to working on the top problem areas we list, so we’re certainly more likely to point people toward working on causes like AI safety than away from them. We don’t want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore work in the fields we’re most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. The spread in personal fit is also often larger than the spread between problems.
I don’t have good statistics on what cause areas people are interested in when they first apply for coaching versus what we discuss on the call or what they end up pursuing. Anecdotally, if somebody applies for coaching but feels good about their role/the progress they’re making, I usually won’t strongly encourage them to work on something else. But if somebody is working on AI Safety and is burnt out, I would definitely explore other options. (Can’t speak confidently on the frequency with which this happens, sorry!) People with skills in this area will be able to contribute in a lot of different ways.
We also speak to people who did a big round of applications to AI Safety orgs, didn’t make much progress, and want to think through what to do next. In this case, we would discuss ways to invest in yourself, sometimes via more school, more industry work, or trying to have an impact in something other than AI safety.
(vs how often do you direct someone away from something else to work on AI Safety)
What advice do you have for mid-career professionals who only have ~40K hours left?
Mid-career professionals are great; you actually have specific skills and a track record of getting things done! One thing to consider is looking through our job board, filtering for jobs that need mid/senior levels of experience, and applying for anything that looks exciting to you. As of writing this answer, we have 392 jobs open for mid/senior-level professionals. Lots of opportunities to do good :)
Most of our advice on actually having an impact — rather than building career capital — is highly relevant to mid-career professionals. That’s because they’re entering their third career stage (https://80000hours.org/career-guide/career-planning/#three-career-stages), i.e. actually trying to have an impact. When you’re mid-career, it’s much more important to appropriately:
Pick a problem
Find a cost-effective way of solving that problem that fits your skills
Avoid doing harm
So we hope mid-career people can get a lot out of reading our articles. I’d probably in particular suggest reading our advanced series (https://80000hours.org/advanced-series/).
Thank you, I will definitely check out the advanced series!
What is the most common and broadly applicable advice that advisors give?
Perhaps surprisingly (and perhaps not as relevant to this audience): take cause prioritization seriously, or more generally, have clarity about your ultimate goals/what you’ll look to to know whether you’ve made good decisions after the fact.
It’s very common that someone wants to do X, I ask them why, they give an answer that doesn’t point to their ultimate priorities in life, I ask them “why [thing you pointed to]?” and they more or less draw a blank/fumble around uncertainly. Granted it’s a big question, but it’s your life, have a sense of what you’re trying to do at a fundamental level.
Don’t be too fixated on instant impact. Take good opportunities as they come of course, but people are often drawn towards things that sound good/ambitious for the problems of the moment even though they might not be best positioned to tackle those things and might burn a lot of future opportunities by doing so. Details will vary by situation of course.
We had a great advising team chat the other day about “sacrificing yourself on the altar of impact”. Basically, we talk to a lot of people who feel like they need to sacrifice their personal health and happiness in order to make the world a better place.
The advising team would actually prefer for people to build lives that are sustainable; they make enough money to meet their needs, they have somewhere safe to live, their work environment is supportive and non-toxic, etc. We think that setting up a lifestyle where you can comfortably work in the long term (and not quickly flame out) is probably best for having a greater positive impact.
Another thing I talk about on calls a lot is: the job market can be super competitive. Don’t over-update on the strength of your CV if you only apply to two places and get rejected. You should probably not conclude much until you’ve been rejected without an interview 10 times (this number is somewhat arbitrary, but it’s a reasonable rule of thumb). If you keep getting rejected with no interviews, then it makes sense to upskill in industry before working in a directly impactful role; this was the path to impact for a huge number of our most productive community members, and should not be perceived negatively! Job applications can also be noisy, so if you want to land an ambitious job you probably need to apply widely and expect to get quite a few rejections. Luisa Rodriguez has a great piece on dealing with rejection. One line I like a lot is: “If I’m not getting rejected, I’m not being ambitious enough.”
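To make the “don’t over-update on a couple of rejections” point concrete, here’s a minimal sketch of the noise involved. The 10-rejection rule of thumb is from the answer above, but the 20% per-application interview rate is purely an assumed number for illustration:

```python
# Illustrative only: how little a few no-interview rejections tell you.
# The per-application interview rate below is an assumption, not 80k data.

def p_no_interviews(interview_rate: float, n_applications: int) -> float:
    """Probability of zero interviews across n independent applications."""
    return (1 - interview_rate) ** n_applications

rate = 0.20  # assume a genuinely strong candidate lands an interview on 20% of applications

print(round(p_no_interviews(rate, 2), 2))   # 0.64 -> two straight rejections are unsurprising
print(round(p_no_interviews(rate, 10), 2))  # 0.11 -> ten with no interview is stronger evidence
```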
A lot of people have gotten the message from EA: “Direct your career towards AI Safety!” Yet there seem to be way too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others’ comments on the forum and having personally applied to 5+ fellowships where there were 500-3,000% more applicants than spots.)
What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80,000 Hours working to better (though perhaps not entirely) balance the supply and demand for AI safety mentorship/jobs?
It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. The mentorship bottleneck definitely makes it less valuable to try to enter technical AI safety on the margin, although we still think it’s often a good move to try, if you have the right personal fit. I’d also add that this bottleneck is much lower if you: 1. enter via more traditional academic or software engineering routes rather than via ‘EA fellowships’ (these routes are our top recommendations anyway); 2. are working on AI risk through governance or other non-technical routes.
I’ll add that it’s going to be the case that some people who try to work in AI technical safety won’t end up getting a job in the field. But one reason we feel very comfortable recommending it is that the career capital you build in this path is just highly valuable, including for other potentially impactful paths. For instance, you can use ML knowledge to become a valuable advisor to policymakers on AI governance issues. You could upskill in infosecurity and make that your comparative advantage. If you’re skilled as an ML engineer, one of your best options may just be earning to give for a while (provided you don’t work somewhere actively harmful) — and this also leaves open the possibility of entering AI safety work down the road if more opportunities open up. As somebody who did a psych/neuro PhD, I can confidently say that the most productive researchers in my field (and those doing the coolest research in my opinion) were people who had a background in ML, so upskilling on these technical fields just seems broadly useful.
There are many different bottlenecks in the AI Safety space. On the technical side, it has become very competitive to get a job in research labs. If technical research is what you’re aiming for, I would potentially recommend doing a PhD, or upskilling in industry. For AI governance, I think there are a ton of opportunities available. I would read through the AI Safety Fundamentals Governance class and this EA forum account to get more information on good ideas in governance and how to get started in the US government.

If you’re feeling totally burnt out on AI safety, I would keep in mind that there are a huge number of ways to have a big impact on the world. Our career guide is tailored to a general audience, but every individual has different comparative advantages; if Shakira asked me if she should quit singing to upskill in ML, I would tell her she is much better placed to continue being an artist, but to use her platform to spread important messages. Not saying that you too could be a global pop sensation, but there’s probably something you could totally kick ass at, and you should potentially design your career around going hard on that.

To answer your second question, we’re trying to talk to older people who can be mentors in the space, and we try to connect younger people with older people outside standard orgs. We also speak to people who are considering spinning up new orgs to provide more opportunities. If this is something you’re considering doing, definitely apply to us for coaching!
I think it’s also important to highlight something from Michelle’s post on Keeping Absolutes In Mind. She’s an excellent writer, so I’ll just copy the relevant paragraph here: “For effective altruism to be successful, we need people working in a huge number of different roles – from earning to give to politics and from founding NGOs to joining the WHO. Most of us don’t know what the best career for us is. That means that we need to apply to a whole bunch of different places to find our fit. Then we need to maintain our motivation even if where we end up isn’t the place we thought would be most impactful going in. Hopefully by reminding ourselves of the absolute value of every life saved and every pain avoided we can build the kind of appreciative and supportive community that allows each of us to do our part, not miserably but cheerfully.”
To add on to Abby, I think it’s true of impactful paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital without having much impact before moving across. I think spending time as a software engineer or ML engineer before moving across to safety will both improve your chances and give you a very solid plan B. That said, a lot of safety roles are hard to land, even with experience. As someone who hasn’t coped very well with career rejection myself, I know that can be really tough.
My guess is that in a lot of cases, the root cause of negative feelings here is going to be something like perfectionism. I certainly felt disenchanted when I wasn’t able to make as much progress on AI as I would have liked. But I also felt disenchanted when I wasn’t able to make much progress on ethics, or being more conscientious, or being a better dancer. I think EA does some combination of attracting perfectionists and exacerbating their tendencies. My colleagues have put together some great material on this and other mental health issues:
Howie’s interview on having a successful career with depression and anxiety
Tim Lebon on how altruistic perfectionism is self-defeating
Luisa on dealing with career rejection and imposter syndrome
That said, even if you have a healthy relationship with failure/rejection, feeling competent is really important for most people. If you’re feeling burnt out, I’d encourage you to explore more and focus on building aptitudes. When I felt AI research wasn’t for me, I explored research in other areas, community building, earning to give, and others. I also kept building my fundamental skills, like communication, analysis and organisation. I didn’t know where I would be applying these skills, but I knew that they’d be useful somewhere.
Hey, it’s not a direct answer but various parts of my recent discussion with Luisa cover aspects of this concern (it’s one that frequently came up in some form or other when I was advising), in particular, I’d recommend skimming the sections on ‘trying to have an impact right now’, ‘needing to work on AI immediately’, and ‘ignoring conventional career wisdom’.
What do you think is the most common mistake people make when searching for a career that has a positive societal impact?
Alex Lawsen, my ex-supervisor who just left us for Open Phil (miss ya 😭), recently released a great 80k After Hours Podcast on the top 10 mistakes people make! Check it out here: https://80000hours.org/after-hours-podcast/episodes/alex-lawsen-10-career-mistakes/
How does it feel to be a member of the 1-on-1 team? What things do you think we get wrong about your experience?
People are often surprised that full-time advisors only do ~400 calls/year as opposed to something like 5 calls/day (i.e. ~1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what’s going on in the world and the job markets we track, as well as skilling up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
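In case it helps to see the arithmetic spelled out, here’s a minimal BOTEC sketch (in Python just for concreteness). The 2.25 hours per advisee and ~400 calls/year figures are from above; the working days and focus hours per day are my rough assumptions, not official numbers.

```python
# Rough BOTEC: why ~400 calls/year rather than "5 calls/day".
HOURS_PER_ADVISEE = 2.25   # prep + call + notes/research + intro admin + follow-up emails (from above)
CALLS_PER_YEAR = 400       # roughly what a full-time advisor does (from above)
WORKING_DAYS = 250         # assumed
FOCUS_HOURS_PER_DAY = 8    # assumed

advisee_hours = HOURS_PER_ADVISEE * CALLS_PER_YEAR    # ~900 hours/year of advisee-focused work
available_hours = WORKING_DAYS * FOCUS_HOURS_PER_DAY  # ~2,000 hours/year in total
naive_calls = 5 * WORKING_DAYS                        # "5 calls/day" would be ~1,250 calls...
naive_hours = naive_calls * HOURS_PER_ADVISEE         # ...which alone needs ~2,800 hours

print(f"{advisee_hours:.0f} advisee-focused hours out of ~{available_hours} available per year")
print(f"'5 calls/day' would need ~{naive_hours:.0f} hours before any other work")
```

So even before the research, marketing, and systems work mentioned above, the “5 calls/day” picture doesn’t fit in a working year.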
I love my job so much! I talk to kind hearted people who want to save the world all day, what could be better?
I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom.
Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aimed towards a general audience or at least large swathes of people.
One last thing I’ll add here is that I’ve been a full time advisor for less than a year, but I’ve already spoken to over 200 people. All of these people are welcome to contact me after our call if new questions/decisions pop up. Plus I talk to more new people each week. So I spend a *lot* of time answering emails.
Do you have approximate statistics on the percentage distribution of paths you most commonly recommend during your 1-1 calls? In particular AI Safety related vs anything else, and in AI Safety working at top labs vs policy vs theoretical research. For example: “we recommend 1% of people in our calls to consider work in something climate-related, 50% consider work in AI Safety at OpenAI/other top labs, 50% to consider work in AI-policy, 20% to consider work in biosecurity, 30% in EA meta, 5% in earning to give, …”
I ask because I heard the meme that “80,000hours calls are not worth the time, they just tell everyone to go into AI safety”. I think it’s not true, but I would like to have some data to refute it.
This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions. Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could have some comparative advantage, but to ultimately defer to the advisee on their preferences; we’re big believers in people doing what they’re actually motivated to do. We don’t think it’s sustainable in the long term to work on something that you’re not so interested in.
I also don’t think we track what % of people *we* think should go into AI safety. We don’t think everybody should be working on our top problems (again see “do you think everyone should work on your top list of world problems” https://80000hours.org/problem-profiles/#problems-faq). But AI risk is the world problem we rank as most pressing, and we’re very excited about helping people productively work in this area. If somebody isn’t excited by it or doesn’t seem like a good fit, we will discuss what they’re interested in instead. Some members of our team are people who considered AI safety as a career path but realised it’s not for them — so we’re very sympathetic to this! For example, I applied for a job at an AI Safety lab and was rejected.
Re: calls not being worth people’s time, on a 7-point scale (1 = “useless”, 4 = “somewhat useful”, 7 = “really useful”) most of my advisees consider their calls to be useful; 97% said their call was at least somewhat useful (i.e. at least a 4/7), and 75% rated it a 6/7 or 7/7. So it seems like a reasonable way to spend a couple of hours (between prep/call/reflection) of your life ;)
What are the biggest bottlenecks and/or inefficiencies that impede 80K from having more impact?
Arden here—I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!
We have several different programmes, which face different bottlenecks. I’ll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the “current challenges” sections for each programme (though that’s from some months ago).
Some current bottlenecks:
More writing and research capacity to further improve our online career advice and keep it up to date.
Better web analytics – we have trouble getting good data on what different groups of users like most and what works best in marketing, so aren’t able to iterate and scale as decisively as we’d like.
More great advisors to add to our one-on-one team, so we can do more calls – in fact, we’re hiring for this right now!
There are uncertainties about the world that create strategic uncertainties for the organisation as a whole—e.g. what we should expect to happen with TAI and when. These affect the content of our careers advice as well as overall things like ‘which audiences should the different programmes focus on?’ (For example, in the AI timelines case, if we were confident in very short timelines it’d suggest focusing on older audiences, all else equal).
We’re also a growing, mid-sized org, so we have to spend more time on processes and coordination than we used to. We’re making good progress here, though (e.g. we’re training up a new set of “middle managers” to scale our programmes).
Tracking and evaluating our impact – to know what’s working well and where to invest less – is always challenging, as impacts on people’s careers are hard to find out about, often take years, and are sometimes difficult to evaluate. This means our feedback loops aren’t as strong as would be ideal for making plans and evolving our strategy.
I think there are themes around time/capacity, feedback loops, and empirical uncertainties, some of which are a matter of spending more research time, some of which are harder to make progress on.
Hi, and thanks for doing this!
I’m curious what your perspective is on the value of economics as a major for those who don’t wish to pursue a PhD? In particular I’m curious about the following excerpt on choosing a major from https://80000hours.org/articles/college-advice/
“Putting all this together, and holding all else equal:
We think it’s reasonable to aim for the most fundamental, quantitative option you can do, i.e. one of these in the following order: mathematics, economics, computer science, physics, engineering, political science/chemistry/biology (the last three are roughly equal).”
Personally, I would’ve considered computer science, physics, and engineering to be more quantitative than economics. Also, in my experience these are considered harder majors, thus sending a stronger signal to employers.
(Disclaimer: I am studying economics myself, so perhaps I’m looking for some reassurance :))
Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, but economic forces shape our lives: they inform global power conflict, the different aims and outcomes of similar-sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and potentially give you insights into effective solutions. It’s often a good background for entering policy careers, which can be really broadly impactful, though you may benefit from additional credentials, like a master’s. It also opens up some earning to give opportunities that let you stay neutral and dynamically direct your annual donations to whatever cause you find most pressing or opportunities you see as most promising. So I think you can do cool research at a think tank and/or standard E2G stuff in finance with just a bachelor’s in economics.
As an early career executive assistant, I’m watching the world of admin/ops work rapidly change—currently, for the better due to increased efficiency—with AI tool adoption. I want to avoid wasting time learning skills or gaining experience in a role that will become obsolete in the near to medium term future. Does 80K Hours have advice for strategically thinking about upskilling, specializing, and/or pivoting to alternate career paths? Do you foresee roles like personal assistants remaining relevant and impactful (if at high-impact organizations)?
I’m no longer on the team but my hot take here is that a good bet is just going to be trying really hard to work out which tools you can use to accelerate/automate/improve your work. This interview with Riley Goodside might be interesting to listen to, not only for tips on how to get more out of AI tools, but also to hear about how the work he does in prompting those tools has rapidly changed, but that he’s stayed on the frontier because the things he learned have transferred.
This is a really interesting question! Unfortunately, it was posted a little too late for me to run it by the team to answer. Hopefully other people interested in this topic can weigh in here. This 80k podcast episode might be relevant? https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/
What is your thinking on how people should think about their intelligence when it comes to pursuing careers in AI safety? Also, what do you think about this in terms of field building?
I think there are a lot of people who are “smart” but not super-geniuses like the next von Neumann or Einstein, who might be interested in pursuing AI safety work but are uncertain about how much impact they can really have. In particular, I can envision cases where one might enjoy thinking about thought experiments, reading research on the AI Alignment Forum, writing their own arguments, etc., but might not produce valuable output for a year or more. (At the same time, I know there are cases where someone takes a year or more to reach that point and then becomes really productive.) What advice would you give to this kind of person in thinking about career choice? I am also curious how you think about outreach strategies for getting people into AI safety work. For example, the balance between trying to get the word out as much as possible versus keeping outreach smaller-scale so that only people who are really capable would be likely to learn about careers in AI safety.
Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done.
If you’re interested in AI Safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider applying to orgs, or to some alignment bootcamps. If you’re not getting any traction on applications, consider upskilling in a PhD program or industry. Some 80k advisors are more keen on independent research/taking time off to upskill; I’m not as keen on this. I would totally fail at structuring my time during an independent upskilling period, and I could see myself becoming quite isolated/anxious/depressed doing this. So I would prefer to see people pick up technical skills in a more structured way.
For people who try all these things and still think they’re not making valuable progress, I would suggest a pivot into governance, support/non-technical roles at AI safety relevant orgs, or E2G. Or potentially another cause entirely!
I don’t have as many opinions about outreach strategies for getting people into AI Safety work; overall outreach seems good, but maybe the focus should be “AI risk is a problem” more than, “You should work at these specific orgs!” And there are probably a lot of ways outreach can go badly or be counterproductive, so I think a lot of caution is needed — if people disagree with your approach, try and find out why and incorporate the fact of their disagreement into your decision making.
It’s not a full answer but I think the section of my discussion with Luisa Rodriguez on ‘not trying hard enough to fail’ might be interesting to read/listen to if you’re wondering about this.
In the coming years, do you plan on making a questionnaire to determine career paths, similar to Giving What We Can? Or maybe something similar but different?
This is an interesting idea! I don’t know the answer.
80k seems to exist in a strange equilibrium where they are always asking more people to apply but also rejecting many people. The revealed preferences here are so clear as to be pretty cutting at times. Is there a way that people can orient towards applying even though there is a high chance of rejection?
I was rejected from career advising when I applied! So I’m definitely aware it can be costly. I won’t name names, but I also know of some other people who were rejected and have gone on to have successful careers in the space. Sometimes this is because reviewing is hard and we make mistakes. Sometimes it’s because the thing the applicant needs most is to read more of 80k’s broad advice before trying to tailor it specifically to them. We’re trying to use our time as best we can and to provide support to the people who would most benefit from our advice, so if we can cast a wider net and get more of those people to apply, we want to do that. But I hope we can minimize the costs anyone experiences. I know some people benefit just from thinking through the questions in the application, and we’ve updated the application to make it less work for people. And we really encourage people not to take it as a strong negative signal if they don’t get an advising call — I’d appreciate any additional suggestions on how to convey this message!
> Is there a way that people can orient towards applying even though there is a high chance of rejection?
While it’s easier said than done, I’d try to think of applying as being mostly upside—the application is a useful exercise for career planning in and of itself, and then if we think it makes sense to have a call, you’ll get some extra advice.
Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people who are doing great work who aren’t at a place yet where we think we can be much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour.
Also, you can totally apply again 6 months after your initial application and we will not consider it the least bit spammy. (I’ve spoken to many people who got rejected the first time they applied!)
Another thing to consider is that a lot of the value from the call can be captured by doing these things:
Read our online career guide
Take time to reflect on your values and career. Give yourself 1 hour of dedicated time to do this. Fill out the doc that we would have gone through during the call here: Career Reflection Template
Send your answers on the doc to somebody you trust to get feedback on how you’re thinking through things.
I expect the tradeoff here to work better the easier it is to apply.
Some of these are low-quality questions. Hopefully they contain some useful data about the types of thoughts some people have, though. I left in even the low-quality ones in case they’re useful, but don’t feel forced to read beyond the bolded beginning; I don’t want to waste your time.
What is 80,000 Hours’ official timeline? Take-off speed scenario? I ask this to get a sense of how much time you guys think we’re operating on. This affects some earn-to-give scenarios like “should I look for a scalable career in which it might take years, but I could be reliably making millions by the end of that time?” versus closer-scale scenarios like “become a lawyer next year and donate to alignment think tanks now.”
How worried should I be about local effects of narrow AI? The coordination problem of humanity as well as how much attention is given to alignment and how much attention is given to other EA projects like malaria prevention or biosecurity are things that matter a lot. They could be radically affected by short-term effects of narrow AI, like, say, propaganda machines with LLMs or bioweapon factories with protein folders. Is enough attention allocated to short-term AI effects? Everybody talks about alignment, which is the real problem we need to solve, but all the little obstacles we’ll face on the way will matter a lot as well because they affect how alignment goes!
Does AI constrict debate a bit? What I mean by this is: most questions here are somewhat related to AI and so are most EA thinking efforts I know of. It just seems to be that AI swallows every other cause up. Is this a problem? Because it’s a highly technical subject, are you too swamped with people who want to help in the best way, discover that most things that are not AGI don’t really matter because of how much the latter shapes literally everything else, but simply wouldn’t be very useful in the field? Nah nevermind this isn’t a clear question. This might be better: is there such a thing as too-much-AI burnout? For EAs, should a little bit of a break exist, something which is still important, only a little less, with which they could concentrate on, if only because they will go a little insane concentrating on AI only? Hm.
What is the most scalable form of altruism that you’ve found? Starting a company and hopefully making a lot of money down the line might be pretty scalable—given enough time, your yearly donations could be in the millions, not thousands. Writing a book, writing blog posts, making YouTube videos or starting a media empire to spread the EA memeplex would also be a scalable form of altruism, benefiting the ideas that save the most lives. AI alignment work is (and technically also capabilities, but capabilities is worse than useless without alignment) scalable, in a way, because once friendly AGI is created pretty much every other problem humanity faces melts away. Thanks to your studies and thinking, which method, out of these or more that you know, might be the most scalable form of altruism you can imagine?
What book out of the three free books you offer should I give to a friend? I have not yet read the 80,000 Hours guide, nor have I read Doing Good Better, but I have read The Precipice. I want to see if I can convert/align a friend to EA ideas by having them read a book, but I’m not sure which one is best. Do you have any suggestions? Thanks for offering the book free, by the way! I’m a high-schooler and don’t even have a bank account, so this is very valuable.
How is the Kurzgesagt team? I know that question seems to come out of nowhere, and you probably aren’t responsible for whatever part of 80,000 Hours takes care of PR, but I noticed that you sponsored the Kurzgesagt video about Boltzmann brains that came out today. I’ve noticed that over time, Kurzgesagt seems to have become more and more aligned with the EA style of thinking. Have you met the team personally? What ambitions do they have? Are they planning on collaborating with EA organizations in the far future, or is this just part of one “batch” of videos? Are they planning on a specifically-about-altruism video soon? Or, more importantly: Kurzgesagt does not have any videos on AGI, the alignment problem, or existential threats in general (despite flirting with bioweapons, nukes and climate change). Are they planning on one?
How important is PR to you and do you have future plans for PR scalability? As in, do you have a plan for racking up an order of magnitude more readers/followers/newsletter subscribers/whatever or not? Should you? Have you thought about the question enough to establish it wouldn’t be worth the effort/time/money? Is there any way people on here could help? I don’t know what you guys use to measure utilons/QALY, but how have you tried calculating the dollar-to-good ratio of PR efforts on your part?
Do you think most people, if EA reasoning were explained to them well, would agree with it? Or is there a more fundamental humans-have-different-enough-values thing going on? People care about things like other humans and animals; only things like scope insensitivity stop them from spending every second of their time trying to do as much altruism as possible. Do you think it’s just that? Do you think for the average person it might only take a single book seriously read, or a few blog posts/videos, for them to embark on the path that leads toward using their career for good in an effective manner? How much do you guys think about this?
I’ll probably think of more questions if I continue thinking about this, but I’ll stop it here. You probably won’t get all the way down to this comment anyway, I posted pretty late and this won’t get upvoted much. But thanks for the post anyway, it had me thinking about this kind of things! Good day!
Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!
What processes do you have in place to monitor potentially harmful advice advisees may be given on calls, and to counteract non-reporting due to anonymity concerns?
Ideally, could you share a representative example of a case like this and the procedure your team followed?
(Additional context in a reply to this comment)
Context:
Advisors often only have a few pages of context and a single call (sometimes there are follow-ups) to talk about career options. In my experience, this can be pretty insufficient to understand someone’s needs.
I would be worried that they might push people towards something that may not make sense, and two things could happen:
1) the person may feel more pressure to pursue something that’s not a good fit for them
2) if they disagree with the advice given, they may not raise it. For example, they may not feel comfortable raising the issue because of concerns around anonymity and potential career harm, since your advisors are often making valuable connections and sharing potential candidate names with orgs that are hiring.
I know that 80K doesn’t want people to take its advice too seriously, and numerous posts have been written on this topic. However, I think these efforts won’t necessarily negate 1) and 2), because many 80K advisees may not be as familiar with all of 80K’s content or Forum discourse, and the prospect of valuable connections remains nonetheless.
I personally had a positive experience during a career call, but have heard of a handful of negative experiences second- and third-hand.
[I left 80k ~a month ago, and am writing this in a personal capacity, though I showed a draft of this answer to Michelle (who runs the team) before posting and she agrees it provides an accurate representation. Before I left, I was line-managing the 4 advisors, two of whom I also hired.]
Hey, I wanted to chime in with a couple of thoughts on your followup, and then answer the first question (what mechanisms do we have in place to prevent this). Most of the thoughts on the followup can be summarised by ‘yeah, I think doing advising well is really hard’.
Yep, that’s roughly right. Often it’s less than this! Not everyone takes as much time to fill in the preparation materials as it sounds like you did. One of the things I frequently emphasised when hiring for and training advisors was asking good questions at the start of the call to fill in gaps in their understanding, check it with the advisee, and then quickly arrive at a working model that was good enough to proceed with. Even then, this isn’t always going to be perfect. In my experience, advisors tend to do a pretty good job of linking the takes they give to the reasons they’re giving them (where, roughly speaking, many of those reasons will be aspects of their current understanding of the person they’re advising).
With obvious caveats about selection effects, many of my advisees expressed that they were positively surprised at me relieving this kind of pressure! In my experience advisors spend a lot more time reassuring people that they can let go of some of the pressure they’re perceiving than the inverse (it was, for example, a recurring theme in the podcast I recently released).
This is tricky to respond to. I care a lot that advisees are in fact not at risk of being de-anonymised, slandered, or otherwise harmed in their career ambitions as a result of speaking to us, and I’m happy to say that I believe this is the case. It’s possible, of course, for advisees to believe that they are at risk here, and for that reason or several possible other reasons, to give answers that they think advisors want to hear rather than answers that are an honest reflection of what they think. I think this is usually fairly easy for advisors to pick up on (especially when it’s for reasons of embarrassment/low confidence), at which point the best thing for them to do is provide some reassurance about this.
I do think that, at some point, the burden of responsibility is no longer on the advisor. If someone successfully convinces an advisor that they would really enjoy role A, or really want to work on cause Z, because they think that’s what the advisor wants to hear, or they think that’s what will get them recommended for the best roles, or introduced to the coolest people, or whatever, and the advisor then gives them advice that follows from those things being true, I think that advice is likely to be bad advice for that person, and potentially harmful if they follow it literally. I’m glad that advisors are (as far as I can tell) quite hard to mislead in this way, but I don’t think they should feel guilty if they miss some cases like this.
There might be a slight miscommunication here. Several of the posts (and my recent podcast interview) talking about how people shouldn’t take 80k’s advice so seriously are, I think, not really pointing at a situation where people get on a 1on1 call and then take the advisor’s word as gospel, but more at things like reading a website that’s aimed at a really broad audience, and trying to follow it to the letter despite it very clearly being the case that no single piece of advice applies equally to everyone. The sort of advice people get on calls is much more frequently a suggestion of next steps/tests/hypotheses to investigate/things to read than “ok here is your career path for the next 10 years”, along with the reasoning behind those suggestions. I don’t want to uncritically recommend deferring to anyone on important life decisions, but on the current margin I don’t think I’d advocate for advisees taking that kind of advice, expressed with appropriate nuance, less seriously.
OK, but what specific things are in place to catch potential harm?
There are a few things that I think are protective here, some of which I’ll list below, though this list isn’t exhaustive.
Internal quality assurance of calls
The overwhelming majority of calls we have are recorded (with permission), and many of these are shared for feedback with other staff at the organisation (also with permission). To give some idea of scale, I checked some notes and estimated that (including trials, and sitting in on calls with new advisors or triallists) I gave substantive feedback on over 100 calls, the majority of which were in the last year. I was on the high end for the team, though everyone in 80k is able to give feedback, not only advisors.
I would expect anyone listening to a call in this capacity to flag, as a priority, anything that seemed like an advisor saying something harmful, be that because it was false, because it displayed an inappropriate level of confidence, or because it was insensitive.
My overall impression is that this happens extremely rarely, and that the bar for giving feedback about this kind of concern was (correctly) extremely low. I’m personally grateful, for example, for some feedback a colleague gave me about how my tone might have been perceived as ‘teacher-y’ on one call I did, and another case where someone flagged that they thought the advisee might have felt intimidated by the start of the conversation. In both cases, as far as I can remember, the colleague in question thought that the advisee probably hadn’t interpreted the situation in the way they were flagging, but that it was worth being careful in future. I mention this not to indicate that I never made mistakes on calls, but instead to illustrate why I think it’s unlikely that feedback would miss significant amounts of potentially harmful advice.
Advisee feedback mechanisms
There are multiple opportunities for people we’ve advised to give feedback about all aspects of the process, including specific prompts about the quality of advice they received on the call, any introductions we made, and any potential harms.
Some of these opportunities include the option for the advisee to remain anonymous, and we’re careful about accidentally collecting de-anonymising information, though no system is foolproof. As one example, we don’t give an option to remain anonymous in the feedback form we send immediately after the call (as depending on how many other calls were happening at the time, someone filling it in straight away might be easy to notice), but we do give this option in later follow-up surveys (where the timing won’t reveal identity).
In user feedback, the most common reason given by people who said 1on1 caused them harm is that they were rejected from advising and felt bad/demotivated about that. The absolute numbers here are very low, but there’s an obvious caveat about non-response bias.
On specific investigations/examples
I worked with community health on some ways of preventing harm being done by people advisors made introductions to (including, in some cases, stopping introductions).
I spent more than 5 but less than 10 hours, on two occasions, investigating concerns that had been raised to me about (current or former) advisors, and feel satisfied in both cases that our response was appropriate, i.e. that there was not an ongoing risk of harm following the investigation.
Despite my personal bar for taking concerns of this sort seriously being pretty low compared to my guess at the community average (likely because I developed a lot of my intuitions for how to manage such situations during my previous career as a teacher), there were few enough incidents meriting any kind of investigation that I think giving any more details than the above would not be worth the (small) risk of deanonymising those involved. I take promises of confidentiality really seriously (as I hope would be expected for someone in the position advisors have).
Thanks for this in-depth response, it makes me feel more confident in the processes for the period of time when you were at 80K.
However, since you have left the team, it would be helpful to know which of these practices your successor will keep in place and how much they will change (for example, since you mentioned you were on the high end for giving feedback on calls).
My understanding of many meta EA orgs is that individuals have a fair amount of autonomy. This definitely has its upsides, but it also means that practices can change (substantially) between managers.
I wouldn’t expect the attitude of the team to have shifted much in my absence. I learned a huge amount from Michelle, who’s still leading the team, especially about management. To the extent you were impressed with my answers, I think she should take a large amount of the credit.
On feedback specifically, I’ve retained a small (voluntary) advisory role at 80k, and continue to give feedback as part of that, though I also think that the advisors have been deliberately giving more to each other.
The work I mentioned on how we make introductions to others and track the effects of those, including collaborating with CH, was passed on to someone else a couple of months before I left, and in my view the robustness of those processes has improved substantially as a result.
I have seen way too many people not wanting to apply for 80K calls because they aren’t EAs or don’t want to work in x-risk areas. It almost seems like the message is “80K is an EA-aligned-only service.”
How is the team approaching this (changes in messaging, for example)?
Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement, outside the very broad bounds of wanting to do good and being scope-sensitive with regard to that good. Accordingly, I think it’s worth the 10 minutes to apply if you’ve 1) read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I’m not worried about our potential user base being too small.
So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads for example, I’m not worried about being too narrow.
On calls, the way I do this is to not assume people are part of the EA community, and instead see what their personal mindset is when it comes to doing good.
How much would you want people to weight 80k calls in their overall decision-making? (Approximate ranges or examples are fine.)
I think 80k advisors give good advice, so I hope people take it seriously but don’t follow it blindly.
Giving good advice is really hard, and you should seek it out from many different sources.
You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings and sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimistic about acceptance rates at different orgs. I feel really bad about this (my mistakes were pointed out by another advisor and I’ve since course-corrected); I’m just being explicit that we do make mistakes!
Why isn’t there an 80k chat bot?
I think a chatbot fails the cost-benefit analysis pretty badly at this point. There are big reputational hits organizations can take for giving bad advice, and potential hallucinations just create a lot of surface area there. Importantly, the upside is quite minimal too. If a user wants to, they can pull up ChatGPT and ask it to act as an 80k advisor. It might do okay (or about as well as anything we could develop ourselves), only it’d be much clearer that we didn’t sanction its output.
Sudhanshu is quite keen on this, haha! I hope that at the moment our advisors are more clever and give better advice than GPT-4. But keeping my eye out for Gemini ;) Seriously though, it seems like an advising chat bot is a very big project to get right, and we don’t currently have the capacity.