A lot of people have gotten the message from EA: “Direct your career towards AI safety!” Yet there seem to be far too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others’ comments on the forum and having personally applied to 5+ fellowships where there were 500–3,000% more applicants than spots.)
What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? And how is 80,000 Hours working to better (though perhaps not entirely) balance the supply of and demand for AI safety mentorship and jobs?
It would be awesome if there were more mentorship and employment opportunities in AI safety! I agree this is a frustrating bottleneck, and I would love to see more senior people enter the space and open up new opportunities. The mentorship bottleneck definitely makes it less valuable on the margin to try to enter technical AI safety, although we still think it’s often a good move to try if you have the right personal fit. I’d also add that this bottleneck is much smaller if you: 1. enter via more traditional academic or software engineering routes rather than via ‘EA fellowships’ (these routes are our top recommendations anyway); or 2. work on AI risk through governance or other non-technical routes.
I’ll add that some people who try to work in technical AI safety won’t end up getting a job in the field. But one reason we feel very comfortable recommending this path is that the career capital you build along the way is highly valuable, including for other potentially impactful paths. For instance, you can use ML knowledge to become a valuable advisor to policymakers on AI governance issues. You could upskill in information security and make that your comparative advantage. If you’re skilled as an ML engineer, one of your best options may simply be earning to give for a while (provided you don’t work somewhere actively harmful), which also leaves open the possibility of entering AI safety work down the road if more opportunities open up. As somebody who did a psych/neuro PhD, I can confidently say that the most productive researchers in my field (and, in my opinion, those doing the coolest research) were people with a background in ML, so upskilling in these technical fields just seems broadly useful.
There are many different bottlenecks in the AI safety space. On the technical side, it has become very competitive to get a job at research labs. If technical research is what you’re aiming for, I would recommend considering a PhD or upskilling in industry. For AI governance, I think there are a ton of opportunities available. I would read through the AI Safety Fundamentals governance course and this EA forum account to get more information on good ideas in governance and on how to get started in the US government.

If you’re feeling totally burnt out on AI safety, keep in mind that there are a huge number of ways to have a big impact on the world. Our career guide is tailored to a general audience, but every individual has different comparative advantages. If Shakira asked me whether she should quit singing to upskill in ML, I would tell her she is much better placed to continue being an artist and to use her platform to spread important messages. I’m not saying that you too could be a global pop sensation, but there’s probably something you could totally kick ass at, and you should consider designing your career around going hard on that.

To answer your second question: we’re trying to talk to more senior people who could mentor others in the space, and we try to connect younger people with experienced people outside the standard orgs. We also speak to people who are considering spinning up new orgs that would provide more opportunities. If this is something you’re considering doing, definitely apply to us for coaching!
I think it’s also important to highlight something from Michelle’s post on Keeping Absolutes In Mind. She’s an excellent writer, so I’ll just copy the relevant paragraph here: “For effective altruism to be successful, we need people working in a huge number of different roles – from earning to give to politics and from founding NGOs to joining the WHO. Most of us don’t know what the best career for us is. That means that we need to apply to a whole bunch of different places to find our fit. Then we need to maintain our motivation even if where we end up isn’t the place we thought would be most impactful going in. Hopefully by reminding ourselves of the absolute value of every life saved and every pain avoided we can build the kind of appreciative and supportive community that allows each of us to do our part, not miserably but cheerfully.”
To add to Abby’s point: I think it’s true of impactful paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital without having much impact before moving across. Spending time as a software engineer or ML engineer before moving into safety will both improve your chances and give you a very solid plan B. That said, a lot of safety roles are hard to land even with experience. As someone who hasn’t coped very well with career rejection myself, I know that can be really tough.
My guess is that in a lot of cases, the root cause of negative feelings here is something like perfectionism. I certainly felt disenchanted when I wasn’t able to make as much progress on AI as I would have liked. But I also felt disenchanted when I wasn’t able to make much progress on ethics, or on being more conscientious, or on being a better dancer. I think EA does some combination of attracting perfectionists and exacerbating their tendencies. My colleagues have put together some great material on this and other mental health issues:
Howie’s interview on having a successful career with depression and anxiety
Tim LeBon on how altruistic perfectionism is self-defeating
Luisa on dealing with career rejection and imposter syndrome
That said, even if you have a healthy relationship with failure and rejection, feeling competent is really important for most people. If you’re feeling burnt out, I’d encourage you to explore more and focus on building aptitudes. When I felt AI research wasn’t for me, I explored research in other areas, community building, earning to give, and other paths. I also kept building my fundamental skills, like communication, analysis and organisation. I didn’t know where I would end up applying these skills, but I knew they’d be useful somewhere.
Hey, it’s not a direct answer, but various parts of my recent discussion with Luisa cover aspects of this concern (it’s one that frequently came up in some form or other when I was advising). In particular, I’d recommend skimming the sections on ‘trying to have an impact right now’, ‘needing to work on AI immediately’, and ‘ignoring conventional career wisdom’.