The OpenAI and DeepMind posts you linked aren’t necessarily relevant, e.g. the Software Engineer, Science role is not for DeepMind’s safety team, and it’s pretty unclear to me whether the OpenAI ML engineer role is safety-relevant.
My model is that if you want to move from generic software engineering to safety work, these would be very good next steps.
This seems plausible, but also quite distinct from the claim that “roles for programmers in direct work tend to sit open for a long time”, which I took the list of openings to be supporting evidence for.