Quick context: I’m a philosophy graduate aiming to transition into AI governance/policy research or AI safety advocacy. As part of this path, I’m planning to work at for-profit companies to build experience and financial stability during the transition, and I’m seeking advice on which for-profit roles can best build relevant skills.
My question is: what kinds of roles (especially outside of obvious research positions) are valuable stepping stones toward AI governance/policy research? I don’t yet have direct research experience, so I’m particularly interested in roles that are more accessible early on but still help me develop transferable skills, especially those that might not be intuitive at first glance.
My secondary interest is in AI safety advocacy. Are there particular entry-level or for-profit roles that could serve as strong preparation for future advocacy or field-building work?
A bit about me:
– I have a strong analytical and critical thinking background from my philosophy BA, including structured and clear writing experience
– I’m deeply engaged with the AI safety space: I’ve completed BlueDot’s AI Governance course, volunteered with AI Safety Türkiye, and regularly read and discuss developments in the field
– I’m curious, organized, and enjoy operations work, in addition to research and strategy
If you’ve navigated a similar path, have ideas about stepping-stone roles, or just want to connect, I’d be happy to chat over a call as well! Feel free to schedule a 20-min conversation here.
Thanks in advance for any pointers!
Hi İrem,
Thank you for your questions. The key question to ask yourself is to what extent a for-profit role can set you up for the future roles you listed. Although for-profit work can support you in building more financial stability, there may be a more direct way to build the career capital for your desired role.
Have you looked at the 80k job board (https://jobs.80000hours.org/), and could you identify some roles there that you find interesting? Try to also include non-profit roles at first, and figure out to what extent they match your salary expectations.
If it turns out that for-profit is the only route for you at the moment, you can look for consulting, project/product management, communications, or operations jobs that overlap with safe/responsible AI development. On the for-profit side, there is also a whole industry emerging around helping organizations understand policy implications and ensure compliance with upcoming AI regulations.
If you feel you are still at the beginning of your ecosystem research, it might help to look into the most recent developments on the policy side (e.g. the EU AI Act) and other outputs from leading organizations (papers, etc.). Ask yourself: What sparks my interest here, and where do I see myself fitting in?
Maybe this post from yesterday is also interesting for you: https://forum.effectivealtruism.org/posts/D6ECZQMFcskpiGP7Q/ten-ai-safety-proj
Hope some of that is helpful!
All the best,
Simon (Career Advisor @ Successif)