Stimulating a legal response to AI misuse sounds like a great direction! The legal field around AI is super vague right now, so helping to define it properly could be a really good thing. Though I'd note that complaining about chatbot gaslighting can have the opposite effect, by creating noise and drawing attention away from more important issues. The other potential problem is that if public actions on AI are immediately punished, it would only make all AI research even more closed. It would also strengthen the protective mechanisms of big corporations (the 'antifragility' idea).
My impression is that we need to maximize for a strong reaction to big fuck-ups from AI use. And those fuck-ups will inevitably follow, as happens with all experimental technologies. So maybe focus on the stronger cases?
Yes, and seminal cases.