You can make a pretty good case for regulating AI deployment even if you’re an AI x-risk skeptic like me. The simple point is that companies, and sometimes even governments, are deploying algorithms they don’t fully understand, and the more power is handed over to these algorithms, the greater the potential damage when the code goes wrong. I would guess that AI “misalignment” already has a death toll; my go-to example is mass shooters whose radicalisation was aided by social media algorithms. Add to that the issues with algorithmic bias, the use of predictive policing and so on, and the case for some sort of regulation is pretty clear.