Thanks Dan. I agree that industry is more significant (even though the publishing part of it is ~10x smaller than academic AI research). If you have some insight into the size and quality of the non-publishing part, that would be useful.
Do language models default to racism? As I understand the Tay case, it took thousands of adversarial messages to make it racist.
Agree that the conflation of trust and trustworthiness is very toxic. I tend to ignore XAI.