Richard Ngo recently wrote a post on careers in AI safety.
I think you could divide AI safety careers into six categories. I've written some quick, tentative thoughts on how you could get started in each, but I'm definitely not an expert in this.
Software engineering: infrastructure, building environments, etc.
Do LeetCode/NeetCode and other interview prep, and get referrals, to try to land a really good entry-level software engineering job. Work in software engineering for a few years and try to get really good at it (e.g., being able to dive into a large, unfamiliar codebase and submit a significant pull request within a few weeks). Maybe learn in-demand skills like parallel computing, data engineering, and information security. Then try to get a software engineering role at Anthropic, Redwood Research, etc. Anthropic is generally looking for fairly experienced engineers, as they aren't able to provide enough mentorship for new engineers at this stage.
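To make "LeetCode/NeetCode interview prep" a bit more concrete, here's a minimal sketch of the kind of problem you'd practice (the classic two-sum question); the function name and test case are just my own illustration, not from any particular prep resource.

```python
# Illustrative only: a classic two-sum interview problem, the kind of exercise
# LeetCode/NeetCode prep involves. Given a list of numbers and a target, return
# the indices of two numbers that add up to the target (or None if no pair does).
from __future__ import annotations


def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    seen: dict[int, int] = {}  # value -> index where we first saw it
    for i, x in enumerate(nums):
        complement = target - x
        if complement in seen:  # found a pair summing to the target
            return seen[complement], i
        seen[x] = i
    return None


if __name__ == "__main__":
    print(two_sum([2, 7, 11, 15], 9))  # (0, 1)
```

The point of this style of practice is learning to reach the expected answer quickly (here, a single pass with a hash map rather than checking every pair).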
ML implementation: converting a research idea into a working model.
Take an ML course (you can apply for a grant from the Long-Term Future Fund if necessary), especially one on deep learning for natural language processing or reinforcement learning; reproduce some ML papers; maybe do a master's in ML if you want; then apply for ML jobs at Redwood or Anthropic.
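As a very rough illustration of what "converting a research idea into a working model" looks like day to day (this is my own toy sketch, not something from Redwood or Anthropic), here's the skeleton of a PyTorch training loop on synthetic data; all shapes and hyperparameters are arbitrary.

```python
# Toy sketch: the basic model/loss/optimizer/training-step loop that almost any
# ML paper reproduction builds on. The data here is random and stands in for a
# real dataset.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 10)         # 256 fake examples with 10 features each
y = torch.randint(0, 2, (256,))  # fake binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backward pass
    optimizer.step()             # parameter update
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Reproducing a real paper mostly means scaling this skeleton up: real datasets, the paper's architecture and hyperparameters, logging, and debugging until your numbers roughly match the reported ones.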
ML research direction: coming up with good ideas, designing experiments.
Maybe do a PhD in machine learning and apply to CHAI, DeepMind, or OpenAI? But I've heard that a PhD takes way too long, and many AI safety orgs aren't that credentialist. I have no idea what I'm talking about here.
Theory research: building good abstractions, mathematical reasoning.
Go through the AGI Safety Fundamentals technical alignment program or dive deep into alignment research that seems interesting to you. Think about the Eliciting Latent Knowledge problem and Richard Ngo’s Alignment research exercises, and maybe apply for a grant from the Long-Term Future Fund to do independent research.
AI policy
I'm not that familiar with this, but I think you could start with the AGI Safety Fundamentals governance program.
Non-technical roles in AI safety orgs such as Redwood Research
I'm also personally excited about AI safety field-building at top universities (something like EA movement-building), based on the experience of EA at Georgia Tech, OxAI Safety Hub, EA NYU, and AI Safety @ MIT this semester.
Again, check out Richard Ngo’s post on careers in AI safety, and apply for relevant internships/residencies. AI jobs that aren’t related to safety can still be helpful for gaining experience so you can transition to safety work.