That’s a great resource to navigate my self study in ML! Thank you for compiling this list.
I wonder whether a pull request to a popular machine learning library or tool counts as a step toward AI Safety Research. Say a PR implements some safety feature for PyTorch, e.g. in interpretability, adversarial training, or another area. Would it count as Level 6, since it is reimplementing papers? Making a PR arguably takes more effort than just reimplementing a paper, since the code needs to fit into an existing tool.