I think they might have been better off spending that effort becoming really good at ML, with the aim of being well skilled up to work on AI safety later.
I’m broadly sympathetic to this, but I also want to note that there are some research directions in mainstream ML which do seem significantly more valuable than average. For example, I’m pretty excited about people getting really good at interpretability, so that they have an intuitive understanding of what’s actually going on inside our models (particularly RL agents), even if they have no specific plans about how to apply this to safety.