It seems generally useful for people aiming at a career in AI Safety to have some capability in ML software engineering. Some of the most straightforward routes I’ve come across are:
Online courses, such as fast.ai or those on Coursera.
Bootcamps, such as the one recently organized by Redwood Research (https://forum.effectivealtruism.org/posts/iwTr8S8QkutyYroGy/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan).
The second is probably one of the best options, but it requires a degree of career flexibility that is not available to everyone. On the other hand, after taking a few of the most popular ML courses, I still struggle quite a lot to code working solutions in either ML or RL, even though I understand all the math underneath.
Would an ML engineering fellowship be something useful for the community?
I think that working with a small group of colleagues to implement a specially chosen problem or paper, with some supervision available, would help people learn quickly without getting stuck for long stretches. Of course, I know of those incredible cowboys of GitHub who manage to do this alone, but I would personally find such a program very valuable.
Programs like the OpenAI Residency may be a good idea. You may also want to consider interning somewhere like DeepMind, CHAI, or Cohere. There is also a lot of mentorship in the EleutherAI Discord. We are in a time where highly skilled EA-aligned engineers are very expensive in both time and money, and under shorter timelines it may not make sense for any individual engineer to give up time for a program like this. If something like this still didn’t exist in 2 to 3 years’ time, I would be very interested in running one myself.
I may be interested. I’m an ML master’s student but I have close to zero experience in ML implementation.
+1. This seems like a useful and quite bottlenecked area; demand is unlikely to decrease anytime soon, and the existing barriers to entry are significant.
Have thought about this a fair amount lately and appreciate your post.