How often do you direct someone away from AI safety to work on something else (say, global health and development), versus how often do you direct someone away from something else to work on AI safety?
Our advising is most useful to people who are interested in, or open to, working on the top problem areas we list, so we're certainly more likely to point people toward AI safety than away from it. We don't want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore the fields we're most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. Differences in personal fit are also often larger than differences between problems.
I don't have good statistics on which cause areas people are interested in when they first apply for coaching versus what we discuss on the call or what they end up pursuing. Anecdotally, if somebody applies for coaching but feels good about their role and the progress they're making, I usually won't strongly encourage them to work on something else. But if somebody is working on AI safety and is burnt out, I would definitely explore other options with them. (I can't speak confidently to how often this happens, sorry!) People with skills in this area can contribute in a lot of different ways.
We also speak to people who did a big round of applications to AI safety orgs, didn't make much progress, and want to think through what to do next. In that case, we'd discuss ways they could invest in themselves, sometimes via more school or more industry work, or ways they could have an impact on something other than AI safety.