I’m glad to hear you’re excited about this! I made a list of technical AI safety upskilling resources for someone in your position. I highly recommend the “Shallow review...” article to get a broad overview of the field and see which research agendas excite you and match your potential skillset. Testing your fit with small projects on evenings/weekends can help with this as well. There are a few “expert advice” articles in there that speak to object-level skills, though the specifics will vary by sub-field, by research engineer vs. research scientist roles, etc.
In my opinion this is a great field for self-directed learners. There is so much available in the open: online courses (listed in the link above), arxiv.org papers to read, and public discussion on Twitter, the Alignment Forum, or here on the EA Forum. I’d recommend sharing your work publicly once you have some cool results to throw on GitHub or into a post, even if you’re just implementing a paper or exploring part of a research methodology. It’ll be valuable to make your skills legible and to get feedback as you develop them. Relatedly, networking can be really valuable for hearing what projects or research agendas others are excited about and for getting more concrete feedback. Conferences are one way to do this, but so are BlueDot courses, local university AI safety or EA groups, and outreach to alumni or academics in your network. Try to be succinct and specific when you do such outreach, engaging with their work in particular.