What is your thinking on how people should think about their intelligence when it comes to pursuing careers in AI safety? Also, what do you think about this in terms of field building?
I think there are a lot of people who are “smart” but may not be super-geniuses like the next von Neumann or Einstein, who might be interested in pursuing AI safety work but are uncertain about how much impact they can really have. In particular, I can envision cases where someone might enjoy thinking about thought experiments, reading research on the AI Alignment Forum, writing their own arguments, etc., but might not produce valuable output for a year or more. (At the same time, I know there are cases where someone takes a year or more to reach that point and then becomes really productive.) What advice would you give to this kind of person in thinking about career choice? I am also curious how you think about outreach strategies for getting people into AI safety work — for example, the balance between trying to get the word out as much as possible versus keeping outreach small-scale so that only people who are really capable would be likely to learn about careers in AI safety.
Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider applying to orgs, or to some alignment bootcamps. If you’re not getting any traction on applications, consider upskilling in a PhD program or in industry. Some 80k advisors are more keen on independent research or taking time off to upskill; I’m not as keen on this. I would totally fail at structuring my time during an independent upskilling period, and I could see myself becoming quite isolated, anxious, or depressed doing it. So I would prefer to see people pick up technical skills in a more structured way. For people who try all these things and still think they’re not making valuable progress, I would suggest a pivot into governance, support or non-technical roles at AI safety–relevant orgs, or earning to give. Or potentially another cause entirely!
I don’t have as many opinions about outreach strategies for getting people into AI safety work. Overall, outreach seems good, but maybe the focus should be “AI risk is a problem” more than “You should work at these specific orgs!” And there are probably a lot of ways outreach can go badly or be counterproductive, so I think a lot of caution is needed — if people disagree with your approach, try to find out why, and incorporate the fact of their disagreement into your decision making.
It’s not a full answer but I think the section of my discussion with Luisa Rodriguez on ‘not trying hard enough to fail’ might be interesting to read/listen to if you’re wondering about this.