Is there an argument for having AI fail spectacularly in a small way which raises enough global concern to slow progress/increase safety work?
Given that AI is being developed by companies running on a “move fast and break things” philosophy, a spectacular failure of some sort is all but guaranteed.
It’d have to be bigger than mere lost productivity to slow things down, though. Social media algorithms arguably already have a body count (via radicalisation), and those have not been slowed down.
Very fair response, thanks!