Redwood Research is looking for people to help us find flaws in our injury-detecting model. We’ll pay $30/hour for this, for up to 2 hours; after that, if you’ve found interesting stuff, we’ll pay you for more of this work at the same rate. I expect our demand for this to last for maybe a month (though we’ll probably need more in the future).
If you’re interested, please email adam@rdwrs.com so he can add you to a Slack or Discord channel with other people who are working on this. This might be a fun task for people who like being creative, being tricky, and figuring out how language models understand language.
You can try out the interface here. The task is to find things that the model classifies as non-injurious that are actually injurious according to our definition. Full instructions here.
This is in service of this research project.
EDIT: update wage from $20/hour to $30/hour.
If you tweet about this I’ll tag it with @effective_jobs.