Thanks for the comment Dan. I agree that the adversarially mined examples literature is the right reference class, of which the two that you mention (Meta’s Dynabench and ANLI) were the main examples (maybe the only examples? I forget) while we were working on this project.
I’ll note that Meta’s Dynabench sentiment model (the only model of theirs that I interacted with) seemed substantially less robust than Redwood’s classifier (e.g. I was able to defeat it manually in about 10 minutes of messing around, whereas I needed the tools we made to defeat the Redwood model).
I think the adversarial mining thing was hot in 2019. IIRC, HellaSwag and others did it; I’d venture maybe 100 papers did it before RR, but I still think it was underexplored at the time and I’m happy RR investigated it.