I’m a fairly good ML student who wants to decide on a research direction for AI safety.
I’m not actually sure it’s a good idea for ML students to try to work on AI safety. I’m pretty skeptical of most of the research done by pretty good ML students who try to make their research relevant to AI safety: it usually feels to me like their work ends up not contributing to any of the core difficulties. I think they might have been better off spending that effort becoming really good at ML, so that they’re better positioned to work on AI safety later.
I don’t have much better advice for how to get started on AI safety; I think the “recommend applying to AIRCS and pointing people at 80K and maybe the Alignment Newsletter” path is pretty reasonable.
“I think they might have been better off spending that effort becoming really good at ML, so that they’re better positioned to work on AI safety later.”
I’m broadly sympathetic to this, but I also want to note that there are some research directions in mainstream ML which do seem significantly more valuable than average. For example, I’m pretty excited about people getting really good at interpretability, so that they have an intuitive understanding of what’s actually going on inside our models (particularly RL agents), even if they have no specific plans about how to apply this to safety.