Iāve taken a few concrete steps:
Applied for 80k career advising, which fortunately I was accepted for. My call is at the end of the month.
Learned the absolute basics of the problem and some of the in-progress attempts to solve it, by doing things like listening to the 80k podcast episodes with Chris Olah and Brian Christian and watching Rob Miles' videos.
Clarified in my own mind that AI alignment is the most pressing problem, largely thanks to posts like Neel Nanda's excellent Simplify EA Pitches to "Holy Shit, X-Risk" and Scott Alexander's "Long-Termism" vs "Existential Risk". (I'd not spent much time considering philosophy before engaging with EA, and haven't had enough time to work out whether I hold the beliefs required to subscribe to longtermism. Fortunately those two posts showed me I probably don't need to decide that yet, and can focus on alignment knowing it's likely the highest-impact cause I can work on.)
Began cold-emailing AI safety folks to ask for advice.
Signed up for some newsletters and joined the AI alignment Slack group.
I plan on taking a few more concrete steps:
Continuing to reach out to people working on AI safety who might be able to offer practical advice on which skills to prioritise to get into the field and what options are available to me.
In a similar vein, trying to find a mentor who can help me both focus my technical skills and maximise my impact.
Getting in contact with the folks at AI Safety Support.
Completing fast.ai's Practical Deep Learning for Coders course.
My first goal is to ascertain whether I'd be a good fit for this kind of work, but given that my prior is that software engineers are likely to be a good fit for working on AI alignment, and that I'm well suited to software engineering, I'm confident this will turn out to be the case. If it does, there are a few next career steps that seem promising:
Applying for relevant internships. A lot of these seem aimed at current students, but I'm hoping I can find some that would be suitable for me.
Getting an interim job that primarily uses Python and ideally ML so I can upskill in those (at the moment my skills are focused on generic backend API development), even if the job isn't focused on safety.
Applying for a grant to self-study for 3-6 months, ideally under the guidance of a mentor, with a view to building a portfolio that would enable me to get a job somewhere like DeepMind.
Applying for research associate positions focused on AI alignment.
I realise I've given little context about my current situation, which might be relevant here, but any feedback on these plans would nonetheless be greatly appreciated!
Nice, I'd also recommend applying for the next round of the AGI Safety Fundamentals course. To be honest, I don't have much else to suggest, as it seems like you've already got a pretty solid plan.