Hi forum,
I’m a philosophy graduate from Oxford who’s been working as a programmer for five years. I’m missing some of the required mathematical background right now, but I think I could be a good fit for AI safety research. I’m trying to figure out the next steps I should take.
Done:
Andrew Ng’s machine learning course (https://www.coursera.org/learn/machine-learning).
Decent understanding of single-variable derivatives.
Some basic statistics (probability distributions, Bayesian inference, confidence intervals, linear/logistic regression).
Python (not an expert but have used it full time for ~1 yr).
Applied to MLAB bootcamp, rejected (I’m a bootcamp graduate so the timed-algorithm-based application process was always a long shot).
Applied to a local computer science master’s, rejected due to ‘insufficient training’ (trying to follow up and get more data).
80K 1:1 career advising. It was a really helpful experience and they suggested I take a closer look at this path.
Some local networking. I’m in Montréal and have a friend who works at MILA; he has been pointing me towards a few events and putting me in touch with some folks in the ML space.
Reading LessWrong, trying to have some of my own ideas on alignment, however simple/flawed. I’ve had the ‘narrow melt-all-GPUs AI’ thought summarized here: https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky. I’ve thought a bit about how strongly typed systems might help; ekmett’s livestreams from when he was working with MIRI are on my to-view list: https://www.twitch.tv/ekmett.
Todo:
Multivariable calculus, chain rule, backpropagation (see the sketch after this list for the level I’m aiming at).
More statistics, e.g. enough to understand all of this ARC paper.
Aforementioned ekmett livestreams.
[...]?
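To make the backpropagation item concrete, here’s the level I’m aiming for first: a single sigmoid neuron trained with hand-derived chain-rule gradients. This is my own toy sketch (made-up numbers, squared-error loss), not taken from any particular course:

```python
# One sigmoid neuron with squared-error loss L = (sigmoid(w*x + b) - y)^2.
# Backpropagation here is just the chain rule applied term by term.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, y, lr=0.1):
    z = w * x + b        # pre-activation
    a = sigmoid(z)       # activation (the prediction)
    # Chain rule, from the loss inward:
    dL_da = 2.0 * (a - y)    # d/da of (a - y)^2
    da_dz = a * (1.0 - a)    # derivative of the sigmoid
    dL_dz = dL_da * da_dz
    dL_dw = dL_dz * x        # dz/dw = x
    dL_db = dL_dz            # dz/db = 1
    # One gradient-descent step:
    return w - lr * dL_dw, b - lr * dL_db

w, b = 0.5, 0.0
for _ in range(1000):
    w, b = train_step(w, b, x=1.0, y=1.0)
print(sigmoid(w * 1.0 + b))  # should be close to the target 1.0
```

My understanding is that multi-layer backpropagation is the same chain-rule bookkeeping applied recursively through each layer, which is why multivariable calculus is on the list.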
I’m interested in this because my impression from struggling with papers in the space (like the ARC one above) is that the work to be done requires mathematical and statistical foundations but actually leans more towards something like philosophy in terms of methodology. I think I can pick the mathematics and statistics up, even if I do it quite a bit more slowly than a lot of the readers on here. And I enjoy the methodology of philosophy and could be pretty good at it (first-class degree from Oxford).
I am looking for advice on what I should do next (and why). Some hypothetical candidate actions: “quit your job, read and implement papers x, y, z, apply to these openings” (a Daniel Ziegler style opportunity https://80000hours.org/podcast/episodes/olsson-and-ziegler-ml-engineering-and-safety/), “take these online courses”, “apply to this course that I suspect will consider applicants with your background”, “try and get a job in problem space x”. “I actually think you have misconceptions about the space and wouldn’t be a great fit” would be disappointing but also valuable feedback if applicable.
I’m in my late 20s and can relocate, retrain, etc. If your suggestion is extreme, please suggest it anyway and I can decide if I’m willing/able. Thanks all!
Hey!
1. 80k wrote about similar things, and I encourage you to talk to them since it’s something they specialize in: https://forum.effectivealtruism.org/posts/gbPthwLw3NovHAJdp/new-80-000-hours-career-review-software-engineering#Working_in_AI_safety
2. I hope someone doing AI safety will reply here.
3. My prior, from watching people try to enter domains I know something about, is that they could save a ton of time if they got insider advice. But I don’t have the answers for AI Safety specifically.
4. I’ll refer some friends to this post.
5. +1 for asking publicly!
This is great advice :) It’s already mentioned below, but for people in similar positions, please do consider booking a coaching call with AI Safety Support: https://www.aisafetysupport.org/. We have experience helping people navigate the AI Safety field and can also connect you to others.
Ah wow!
Would you recommend contacting you over reading this?
https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack
(Please tell me who you would or wouldn’t like me to send your way; I regularly talk to lots of software developers, and sometimes they want to do AI safety.)
Yeah, absolutely! Happy to go through posts offering career advice, how one might implement the advice, if there are any other perspectives to consider, etc.
I would really encourage a low bar for sending people our way; I’m very happy to talk to anyone! But generally, we offer coaching to those trying to get into the AI Safety field (e.g., undergrads looking for research positions, software engineers or research scientists looking for work in the field, independent researchers or community-builders interested in applying for funding). We’re also happy to talk people through AI Safety career-related decisions (e.g., whether or not to go to graduate school, choosing between positions).
I’m going to add some of this to my ‘done’ column, thanks for pointing it out.
Hi Yonatan, I actually got some 1:1 career advice from 80k recently, they were great! I’m also friends with someone in AI who’s local to Montréal and who’s trying to help me out. He works at MILA which has ties to a few universities in the city (that’s kind of what inspired the speculative master’s application). Thanks in advance for the referrals!
Consider applying for https://www.eacambridge.org/agi-safety-fundamentals
Thanks, I’m now on their mailing list!
Richard Ngo recently wrote a post on careers in AI safety.
I think you could divide AI safety careers into six categories. I’ve written some quick tentative thoughts on how you could get started, but I’m not an expert in this for sure.
Software engineering: infrastructure, building environments, etc.
Do LeetCode/NeetCode and other interview prep and get referrals to try to get a really good entry-level software engineering job. Work in software engineering for a few years, try to get really good at engineering (e.g., being able to dive into a large, unfamiliar codebase and submit a significant pull request within a few weeks). Maybe learn in-demand skills like parallel computing, data engineering, information security, etc. Then, try to get into a software engineering role at Anthropic, Redwood Research, etc. Anthropic is generally looking for fairly experienced engineers, as they aren’t able to provide enough mentorship at this stage for new engineers.
ML implementation: converting a research idea into a working model.
Take an ML course (you can apply for a grant from the Long-Term Future Fund if necessary), especially in deep learning for natural language processing or reinforcement learning; reproduce some ML papers; maybe do a master’s in ML if you want; then apply for ML jobs at Redwood or Anthropic.
ML research direction: coming up with good ideas, designing experiments.
Maybe do a PhD in machine learning, apply to CHAI or DeepMind or OpenAI? But I’ve heard that a PhD takes way too long and many AI safety orgs aren’t that credentialist. I have no idea what I’m talking about here.
Theory research: building good abstractions, mathematical reasoning.
Go through the AGI Safety Fundamentals technical alignment program or dive deep into alignment research that seems interesting to you. Think about the Eliciting Latent Knowledge problem and Richard Ngo’s Alignment research exercises, and maybe apply for a grant from the Long-Term Future Fund to do independent research.
AI policy.
I’m not that familiar with this, but I think you could start with the AGI Safety Fundamentals governance program.
Non-technical roles in AI safety orgs such as Redwood Research. I’m also personally excited about AI safety field-building at top universities, something like EA movement-building, based on the experience of EA at Georgia Tech, OxAI Safety Hub, EA NYU, and AI Safety @ MIT this semester.
Again, check out Richard Ngo’s post on careers in AI safety, and apply for relevant internships/residencies. AI jobs that aren’t related to safety can still be helpful for gaining experience so you can transition to safety work.
AI Safety Support (https://www.aisafetysupport.org/) would be an excellent place to reach out to, in my opinion. (I am not in AI Safety myself, but I recently spoke to them about getting into it and they offered some excellent advice.) They offer help for people looking to get started in AI Safety, and can offer that insider perspective that Yonatan refers to in Point 3 of his answer.
Done, thank you!