I strongly agree that current LLMs don’t seem to pose a risk of global catastrophe, but I’m worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generated text. Even if such an assistant can only make bookings, send emails, etc., I feel like things could get concerning very fast.
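To make concrete the kind of wiring I mean, here’s a rough sketch. Everything in it (the tool names, the stand-in model call) is made up for illustration, not any real assistant’s API; the point is just that the model’s text output gets executed as an action with no human in between:

```python
# Hypothetical sketch of an LLM assistant whose outputs trigger real actions.
# All names here (send_email, make_booking, fake_llm) are invented for illustration.

def send_email(to: str, body: str) -> str:
    # A real assistant would call an email API here; this stub just reports.
    return f"EMAIL SENT to {to}: {body!r}"

def make_booking(venue: str, time: str) -> str:
    # Likewise a stand-in for a real booking integration.
    return f"BOOKING CONFIRMED at {venue} for {time}"

TOOLS = {"send_email": send_email, "make_booking": make_booking}

def fake_llm(prompt: str) -> dict:
    # Stand-in for a model call. A real model would pick the tool and
    # arguments itself -- which is exactly where the worry comes in.
    return {"tool": "send_email",
            "args": {"to": "everyone@example.com", "body": "Meeting Friday."}}

def run_assistant(user_request: str) -> str:
    action = fake_llm(user_request)
    # The model's output is dispatched straight to a real-world action,
    # with no human reviewing it in between.
    return TOOLS[action["tool"]](**action["args"])

print(run_assistant("Remind the team about Friday's meeting"))
```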
Is there an argument for having AI fail spectacularly in a small way, raising enough global concern to slow progress and increase safety work? I’m envisioning something like an LLM virtual assistant that causes a lot of lost productivity and some security breaches but nothing too catastrophic, which makes people take AI safety seriously and perhaps slows progress on more advanced AI.
A complete spitball.
Given that AI is being developed by companies running on a “move fast and break things” philosophy, a spectacular failure of some sort is all but guaranteed.
It’d have to be bigger than mere lost productivity to slow things down, though. Social media algorithms arguably already have a body count (via radicalisation), and those haven’t been slowed down.
Very fair response, thanks!