What are you thinking about regarding next steps to become more involved with AI Safety?
I’ve taken a few concrete steps:
Applied for 80k career advising, which fortunately I was accepted for. My call is at the end of the month.
Learned the absolute basics of the problem and some of the ongoing attempts to solve it, by doing things like listening to the 80k podcast episodes with Chris Olah and Brian Christian, watching Rob Miles’ videos, etc.
Clarified in my own mind that AI alignment is the most pressing problem, largely thanks to posts like Neel Nanda’s excellent Simplify EA Pitches to “Holy Shit, X-Risk” and Scott Alexander’s “Long-Termism” vs “Existential Risk”. (I hadn’t spent much time on philosophy before engaging with EA, and I haven’t yet worked out whether I hold the beliefs required to subscribe to longtermism. Fortunately, those two posts showed me I probably don’t need to decide that yet and can focus on alignment, knowing it’s likely the highest-impact cause I can work on.)
Began cold-emailing AI safety folks to see if they can offer me any advice.
Signed up for some newsletters and joined the AI alignment Slack group.
I plan on taking a few more concrete steps:
Continuing to reach out to people working on AI safety who might be able to offer me practical advice on what skills to prioritise in order to get into the field and what options I might have available.
In a similar vein to the above, trying to find a mentor who can help me both focus my technical skills and maximise my impact.
Getting in contact with the folks at AI Safety Support.
Completing the fast.ai Deep Learning for Coders course.
My first goal is to ascertain whether or not I’d be a good fit for this kind of work. But given my prior that software engineers are likely to be a good fit for working on AI alignment, and that I’m a good fit as a software engineer, I’m confident this will turn out to be the case. If it does, there are a few career next steps that seem promising:
Applying for relevant internships. A lot of these seem aimed at current students, but I’m hoping I can find some that would be suitable for me.
Getting an interim job that primarily uses Python and ideally ML so I can upskill in both (at the moment my skills are focused on generic backend API development), even if the job isn’t focused on safety.
Applying for a grant to self-study for 3-6 months, ideally under the guidance of a mentor, with a view to building a portfolio that would enable me to get a job somewhere like DeepMind.
Applying for research associate positions focused on AI alignment.
I appreciate I’ve given little context about my current situation that might be relevant here, but any feedback on these plans would be greatly appreciated!
Nice, I’d also recommend considering applying for the next round of the AGI Safety Fundamentals course. To be honest, I don’t have much else I can recommend, as it seems like you’ve already got a pretty solid plan.