Fantastic article, I ended up sending this to quite a few friends who are great programmers and considering careers in AI Safety.
I think the litmus test is great, we need more like this!
If someone is unsure whether they can make a significant contribution to an AI library, they first have to come up with an idea for a meaningful contribution that they can actually implement. That requires extensive knowledge of the AI library and its shortcomings. Is this part of the litmus test?
If not, it might make sense for AI safety engineers to publicize a difficult toy problem or two that would allow prospective candidates to immediately get a sense of their abilities, without needing to think too much about what type of pull request they would make.