Got sent a set of questions from ARBOx to handle async; thought I’d post my answers publicly:
Can you explain more about mundane utility? How do you find these opportunities?
Lots of projects need people and help! For example, could you contribute to EleutherAI, or close issues in Neuronpedia? Some more ideas (a minimal sketch for surfacing open issues follows these links):
Contribute to the projects within SK’s GitHub follows and stars
Make some contributions within Big list of lists of AI safety project ideas 2025
Reach out to projects that you think are doing cool work and ask if you can help!
BlueDot ideas for SWEs
I’m an experienced software engineer. How can I contribute to AI safety?
The software engineer’s guide to making your first AI safety contribution in <1 week
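If you want a concrete way to surface these opportunities, here’s a minimal sketch that lists beginner-friendly open issues in a public repo via the GitHub API. The repo name and label are assumptions for illustration – swap in whichever project you actually care about, and check which labels it uses.

```python
import requests

# Minimal sketch: surface beginner-friendly open issues in a repo you might
# want to contribute to. Repo and label are illustrative assumptions.
REPO = "EleutherAI/lm-evaluation-harness"
LABEL = "good first issue"

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/issues",
    params={"state": "open", "labels": LABEL, "per_page": 10},
    timeout=10,
)
resp.raise_for_status()

for issue in resp.json():
    # The issues endpoint also returns pull requests; skip those.
    if "pull_request" in issue:
        continue
    print(f"#{issue['number']}: {issue['title']}\n  {issue['html_url']}")
```

The same query works for any public repo; labels like “help wanted” cast a wider net.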
From a non-coding perspective, you could e.g.
Facilitate BlueDot courses
Give people feedback on their research proposals, drafts, etc.
Be accountability partners
Offer to talk to people and share what you know with those who know less than you
Check out these pieces from my colleagues:
How to have an impact when the job market is not cooperating by Laura G Salmeron
Your Goal Isn’t Really to Get a Job by Matt Beard
What is your theory of change?
As an 80k advisor, my ToC is “Try to help someone do something more impactful than they would have if they had not spoken to me.”
Mainly, this means helping people get more familiar with, excited about, and involved in doing things related to AI safety. It’s also about pointing them to resources and sometimes making warm introductions to people who can help them even more.
Are there any particular pipelines / recommended programs for control research?
Just the things you probably already know about – MATS and Astra are likely your best bets, but also look through these papers to see if there is any low-hanging fruit you could pick up as future work
What are the most neglected areas of work in the AIS space?
Hard question, with many opinions! I’m particularly concerned that “making illegible problems legible” is neglected. See Wei Dai’s writing about this:
Legible vs. Illegible AI Safety Problems
Problems I’ve Tried to Legibilize
More groundedly, I’m concerned we’re not doing enough work on Gradual Disempowerment, and more broadly on the questions of what a flourishing future looks like and how to get there, even if we avoid catastrophic risks
In general, AI safety work needs to contend with a collection of subproblems. See davidad’s opinion – A list of core AI safety problems
There are many other such opinions, and it’s good to scan through them to work out how they’re all connected, so that you can see the forest for the trees – and also to work out which problems you’re drawn to or compelled by, and then seek out what’s neglected within those 🙂
Some questions about ops roles:
What metrics should I use to evaluate my performance in ops/fieldbuilding roles? I find ops to be really scattered and messy, and so it’s hard to point to consistent metrics.
Hard to talk about this in concrete terms, because ops is so varied; every task can have its own set of metrics. Instead, think through this strategically:
Be clear on the theory (or theories) of change, and on your roles, activities, and tasks within it. Once you can articulate those things, the metrics worth measuring become a lot clearer
Sometimes we’re not tracking impact because impact evaluation is notoriously difficult. Look for proxies. Red-team them with people you admire
Fieldbuilding metrics can be easier to generate, but I don’t claim to be an expert here – ask folks at BlueDot or the fellowships for better input. Some examples (a small sketch for turning these into funnel rates follows the list):
How many people completed the readings?
How many people did I get to sign up for the BlueDot course?
How many of those finished the BlueDot course?
How many people did I get into an Apart Hackathon?
Did any of my people win?
And so on…
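To make these concrete, here’s a minimal sketch of turning counts like those into funnel conversion rates – the stage names and numbers are made up, purely for illustration.

```python
# Minimal sketch: simple funnel conversion rates from stage counts.
# The stage names and numbers below are made up -- plug in your own.
funnel = {
    "completed_readings": 40,
    "signed_up_for_course": 25,
    "finished_course": 15,
    "joined_hackathon": 8,
}

stages = list(funnel.items())
for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    rate = n / prev_n if prev_n else 0.0
    print(f"{prev_name} -> {name}: {n}/{prev_n} ({rate:.0%})")
```

Tracking these stage-to-stage rates over time shows you where your pipeline leaks, which is often a more useful proxy than any single headline number.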
Likewise, I have a hard time discerning what “ops” really means. What are the best tangible “ops” skills I should go out of my way to skill up on if I want to work in the fieldbuilding/programmes space? Are there “hard” ops skills I should become really good at (like familiarity with certain software programmes, etc.)?
Ops is usually a “get stuff done” bucket of work. Yes, it can help to have functional experience in an ops domain like “Finance” or “IT/office tech infra/website” (and especially “Legal”), but a LOT of ops can be learned on the job/on your own; AI safety is stacked full of folks who didn’t let “I don’t know anything about ops” stop them from figuring it out and getting it done
Under what circumstances should a “technical person” consider switching their career to fieldbuilding?
First things first:
Fieldbuilding is not a consolation prize. Do fieldbuilding if you’re really passionate about helping AI go well, and fieldbuilding is your comparative advantage.
And doubling down on that:
It really really really helps if fieldbuilders are very competent. A fieldbuilder who doesn’t know their shit about AI risk and AI safety can propagate bad ideas among the people they’re inducting into the field.
This can have incredibly high costs:
Pollutes the commons
Wastes time downstream where all this would need to be corrected
Bounces people who might otherwise get up to speed quickly, because their initial contact with these fieldbuilders is of poor quality: poor argumentation, poor epistemics
Conversely, a great fieldbuilder is one who knows how to tend their flock: what people need to prosper and grow, to become competent at thinking properly about AI safety, and to be able to do AI safety work
How would you recommend going about independent project work for upskilling, in place of doing something like SPAR or MATS?
Why not both? In general, I want people to ask themselves this question when making decisions. You can do a lot more than you give yourself credit for.
At the current margins, SPAR, MATS, etc. are probably better than independent work
Some of these fellowships carry a pretty strong signal to employers (based on evidence that has accumulated over time)
There is a lot that these fellowships offer that is sometimes hard to get without them
Research support, mentorship, community engagement, well-scoped projects with deliverables and accountability
Also softer things like physical space and some money
But if you’re great at doing stuff independently, go for it! Neel Nanda didn’t need a fellowship.
A key idea is to keep your eye on the ball – be productive!
The point is to generate outputs
That make you learn
That show that you have learned
That are related to AI safety
That get feedback
That show that you update based on (relevant/good/high-quality) feedback
lfg!