As a newcomer and CS undergrad student, I’m considering pursuing a career in safety research. I’d like to know how deeply I need to understand the core concepts of math (linear algebra, probability), machine learning (model training, loss functions, evaluation), and programming (testing, version control, reproducible workflows) to meaningfully contribute to a research project. I’d also like to know how I can succeed as a largely self-directed learner. Which resources should I be using to build and demonstrate these skills?
Hi Stormo,
Check out my earlier answer to Ronnie, whose situation overlaps substantially with yours!
“How deeply” is a hard question! A good process to practice (and get better at!) is “working backwards from a goal.” So in addition to laying a bottom-up foundation (e.g. through this guide on research engineering), think about the top-down shortest path between where you are and where you want to be. This requires (by definition!) having a clear picture of where you want to go, so sift through various roles, orgs, theories of change, worldviews, etc., and think through what kinds of research projects/skills/experiences lie on the critical path to your targets.
Being a “self-directed learner” is often about trying stuff and figuring out what works for you, but there’s tons of high-quality content on becoming better at this, e.g. this piece by a Google DeepMind AI safety researcher.
You can demonstrate your skills in many ways! Some ideas: write a blog, post on YouTube, do excellent projects on GitHub, and contribute to open-source repos. This list I maintain has some more ideas, as well as some AI safety researchers’ GitHubs for inspiration.
I’m glad to hear you’re excited about this! I made a list of technical AI safety upskilling resources for someone in your position. I highly recommend the “Shallow review...” article to get a broad overview of the field and see which research agendas excite you and match your potential skillset. Testing your fit with small projects on evenings or weekends can help with this as well. There are a few “expert advice” articles in there that speak to object-level skills, though the specifics will vary by sub-field, by research engineer vs. research scientist roles, etc.
In my opinion this is a great field for self-directed learners. There is so much out in the open on the internet: courses (listed in the link above), arxiv.org papers to read, discussion happening in public on Twitter, the Alignment Forum, or here on the EA Forum, etc. I’d recommend sharing your work publicly once you have some cool results to throw on GitHub or into a post, even if you’re just implementing a paper or exploring part of a research methodology. It’ll be valuable to make your skills legible and to get feedback as you develop them. Relatedly, networking can be really valuable for hearing which projects or research agendas others are excited about and for getting more concrete feedback. Conferences are one way to do this, but so are BlueDot courses, local university AI safety or EA groups, and outreach to alumni or academics in your network. When you do such outreach, be succinct and specific, engaging with the person’s work in particular.