It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agreed, this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. The mentorship bottleneck definitely makes it less valuable to try to enter technical AI safety on the margin, although we still think it’s often a good move to try if you have the right personal fit. I’d also add that this bottleneck is much lower if you: 1. enter via more traditional academic or software engineering routes rather than via ‘EA fellowships’ - and these routes are our top recommendations anyway; 2. are working on AI risk through governance or other non-technical routes.
I’ll add that some people who try to work in technical AI safety won’t end up getting a job in the field. But one reason we feel very comfortable recommending it is that the career capital you build on this path is highly valuable, including for other potentially impactful paths. For instance, you can use ML knowledge to become a valuable advisor to policymakers on AI governance issues. You could upskill in infosecurity and make that your comparative advantage. If you’re skilled as an ML engineer, one of your best options may just be earning to give for a while (provided you don’t work somewhere actively harmful) — and this also leaves open the possibility of entering AI safety work down the road if more opportunities open up. As somebody who did a psych/neuro PhD, I can confidently say that the most productive researchers in my field (and those doing the coolest research, in my opinion) were people who had a background in ML, so upskilling in these technical fields just seems broadly useful.
There are many different bottlenecks in the AI safety space. On the technical side, it has become very competitive to get a job at research labs. If technical research is what you’re aiming for, I would potentially recommend doing a PhD or upskilling in industry. For AI governance, I think there are a ton of opportunities available. I would read through the AI Safety Fundamentals Governance class and this EA Forum account to get more information on good ideas in governance and how to get started in the US government.

If you’re feeling totally burnt out on AI safety, I would keep in mind that there are a huge number of ways to have a big impact on the world. Our career guide is tailored to a general audience, but every individual has different comparative advantages; if Shakira asked me whether she should quit singing to upskill in ML, I would tell her she is much better placed to continue being an artist, but to use her platform to spread important messages. Not saying that you too could be a global pop sensation, but there’s probably something you could totally kick ass at, and you should potentially design your career around going hard on that.

To answer your second question, we’re trying to talk to older people who can be mentors in the space, and we try to connect younger people with older people outside standard orgs. We also speak to people who are considering spinning up new orgs to provide more opportunities. If this is something you’re considering doing, definitely apply to us for coaching!
I think it’s also important to highlight something from Michelle’s post on Keeping Absolutes In Mind. She’s an excellent writer, so I’ll just copy the relevant paragraph here: “For effective altruism to be successful, we need people working in a huge number of different roles – from earning to give to politics and from founding NGOs to joining the WHO. Most of us don’t know what the best career for us is. That means that we need to apply to a whole bunch of different places to find our fit. Then we need to maintain our motivation even if where we end up isn’t the place we thought would be most impactful going in. Hopefully by reminding ourselves of the absolute value of every life saved and every pain avoided we can build the kind of appreciative and supportive community that allows each of us to do our part, not miserably but cheerfully.”
To add on to Abby: I think it’s true of impactful paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital without having much impact before moving across. Spending time as a software engineer or ML engineer before moving into safety will both improve your chances and give you a very solid plan B. That said, a lot of safety roles are hard to land, even with experience. As someone who hasn’t coped very well with career rejection myself, I know that can be really tough.