Project Management (PM) is a common role in the tech industry. However, I can't find much information about this role in the AI safety field, other than this earn-to-give-focused 80k career review for product management and this very short 80k career review for research management.
Is PM a common role in AI safety? Does it differ given that most AI safety work is research-focused, while most industry work is product-focused? Do AI safety organizations look for PMs with the strongest technical ability or the best people/management skills?
I see two new relevant roles on the 80,000 Hours job board right now:
Anthropic—Interpretability Team Manager
OpenAI—Product Manager, Applied Safety
Note that I’m not sure this is what you have in mind for AI safety; the OpenAI role seems to be focused on developing and enforcing usage guidelines for products like DALL-E 2, Copilot, and GPT-3.
Here’s an excerpt from Anthropic’s job posting. It asks for basic familiarity with deep learning and mechanistic interpretability but otherwise emphasizes nontechnical skills.