Thanks for your question! Your background in math, software development, and strategic thinking—as well as familiarity with history and politics—may actually be quite relevant for AI governance strategy work, especially in technical policy roles that bridge the gap between research and implementation.
Without knowing the specifics of your career profile (e.g., years of experience, location), here are some very general direct roles for reducing AI risk:
AI Policy Research: roles at think tanks developing concrete policy proposals
Government roles: Congressional staff or agency positions working on AI strategy
Industry governance: Policy roles at AI companies working on safety standards and internal governance
Indirect but valuable:
Technical communication: Translating AI research for policymakers (your flash writing background helps)
Strategy consulting: Helping organizations develop AI risk mitigation approaches
Resources to explore:
80,000 Hours AI Governance and Policy guide (80,000 Hours) - comprehensive overview of the field
Emerging Tech Policy Careers website (Horizon Institute for Public Policy) - specific pathways and opportunities
AI Governance Subfields guide (Damen Curtis) - overview of different specializations within the field
Next steps:
Are there insights or strategies from our recently published article that you think are worth putting into action?
Also, would you consider attending upcoming events to learn more about roles/people/orgs in the space, such as Zurich AI Safety Day and EA conferences/summits?
Finally, apart from reading available resources, consider applying to Successif’s services for tailored career advising/coaching.
All the best,
Moneer (Career Advisor at Successif)
Thanks for the reply, Moneer; these are great ideas!
For more context, I live in the Northeast USA. Software-development-wise, I’m at the junior dev level, so I’m early in my career.
AI Policy Research seems like a good idea to explore. Maybe these questions are answered in some of the reading you suggested, which I haven’t had the chance to check out yet. Does that field need more people? In other words, is it more of a zero-sum field, where getting in means someone else doesn’t, or will I just add to the field? What level of education is recommended, and in what subjects? Do you have any book recommendations for this field?
I’d love to attend a conference or summit (I applied to EAG NYC but did not get in), but money is always an issue.