I feel very conflicted about this.
On the one hand, we don’t want researchers at Google to feel any reluctance to blow the whistle on ethical issues with Google’s AI algorithms.
On the other hand, I’m not convinced that the original founders of the AI ethics group were the right people for the job—you mentioned radicalization; one of them responded with “You can go fuck yourself” when asked a question about the ethics of political violence. The new ethics head says “what I’d like to do is have people have [the conversation about AI ethics] in a more diplomatic way”, which seems like a good thing. I’m not optimistic about a future where the ethics of our AIs are determined by whoever yells the loudest on social media, but currently the ethics discussion in the ML community seems very heated.
For context, the specific ‘question about the ethics of political violence’ was itself somewhat inflammatory:
“So you’re in favor of mob violence, as long as it comes from the left?”
https://twitter.com/pmddomingos/status/1346940377840848898