Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider applying to orgs, or to some alignment bootcamps. If you’re not getting any traction on applications, consider upskilling in a PhD program or industry. Some 80k advisors are more keen on independent research/taking time off to upskill; I’m not as keen on this. I would totally fail at structuring my time during an independent upskilling period, and I could see myself becoming quite isolated/anxious/depressed doing this. So I would prefer to see people pick up technical skills in a more structured way. For people who try all these things and still think they’re not making valuable progress, I would suggest a pivot into governance, support/non-technical roles at AI-safety-relevant orgs, or earning to give (E2G). Or potentially another cause entirely!
I don’t have as many opinions about outreach strategies for getting people into AI safety work; overall outreach seems good, but maybe the focus should be “AI risk is a problem” more than “You should work at these specific orgs!” And there are probably a lot of ways outreach can go badly or be counterproductive, so I think a lot of caution is needed — if people disagree with your approach, try to find out why and incorporate the fact of their disagreement into your decision making.