Thanks for this post and for helping run this project! As we’ve discussed, I think this is a valuable effort.
I wanted to mention a few things:
I agree that nuclear risk work can have useful benefits for testing fit and building career capital for work in other areas, including AI governance. I also agree that there will be some people for whom nuclear-risk-related projects/jobs are the best next step even if their primary goal is to ultimately work on AI governance. But I also think there are many more direct paths into AI governance, some of which are very open to early-career people, and people who ultimately want to work on AI governance should probably apply for those as well.
Two key points here are that applying is often pretty quick and that assessing one's own fit for roles is often quite hard. So it's often better to be empirical, just trying and seeing what happens, rather than self-selecting out by assuming the roles require more experience or skill than one has.
I'd guess a lot of people should simply apply both for the CERI nuclear risk stream and for AI-governance-specific opportunities.
Relevant opportunities include or can be found at:
https://www.governance.ai/opportunities/fellowships
Rethink Priorities' "Research Assistant — AI Governance and Strategy" and "Research Fellow — AI Governance and Strategy" roles
AI governance streams/projects with CERI, SERI, CHERI, or similar programs
https://80000hours.org/job-board/ai-safety-policy/
List of EA funding opportunities
Regarding "Who else is working on the problem?", people might also find the "nuclear risk" view of my Database of orgs relevant to longtermist/x-risk work useful.
This is far from comprehensive, but it still lists a fair few orgs and some basic info on each.
I've now also published 8 possible high-level goals for work on nuclear risk, which is quite relevant to some parts of this post.
[Disclaimer-or-something: Will and I collaborate on various things, but I’m not formally involved in CERI and didn’t review this specific post. Also, I work at Rethink Priorities and am a grantmaker on the EAIF, but I wrote this comment in a personal capacity, not as a representative of any org.]
Many thanks for this comment, especially the part below, which I was aware of but forgot to include, and which I’ve now incorporated into the main text of my post.