Inverse reinforcement learning could let AI systems model the current preferences and likely media reactions of populations, allowing new AI propaganda systems to pre-test ideological messaging with much greater accuracy and to shape government ‘talking points’, policy rationales, and ads to be far more persuasive.
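To make the ‘pre-testing’ idea concrete, here is a minimal sketch. Everything in it is hypothetical: the weights stand in for an audience preference model that would in practice be fit to behavioural data (the IRL step itself is glossed over), and the scoring is just a bag-of-words toy.

```python
# Toy sketch of "pre-testing" message variants against a learned audience
# preference model. All weights and messages are made up; in a real system
# the weights would be recovered from behavioural data (shares, approval,
# dwell time...) via something like an IRL-style fit, not hand-written.

from collections import Counter

# Stand-in for an inferred reward function over messages.
audience_weights = {
    "security": 1.4,
    "jobs": 1.1,
    "freedom": 0.9,
    "tax": -0.6,
    "regulation": -0.8,
}

def score(message: str) -> float:
    """Predicted audience approval: weighted count of salient terms."""
    words = Counter(message.lower().split())
    return sum(audience_weights.get(w, 0.0) * n for w, n in words.items())

candidate_talking_points = [
    "new regulation will protect jobs",
    "this policy means security and jobs without new tax",
    "freedom requires accepting some regulation",
]

# "Pre-test": rank the drafts by predicted approval before publishing any.
for msg in sorted(candidate_talking_points, key=score, reverse=True):
    print(f"{score(msg):+.2f}  {msg}")
```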
The same can be said for messages that come from non-government sources. Governments have always had an advantage in resources and laws, so they’ve always held the high ground in information warfare and propaganda, yet dissenting ideas still spread widely. I don’t see why AI would shift that balance.
Likewise, the big US, UK, and EU media conglomerates could weaponize AI ideological-engineering systems to craft more effective messaging in their TV, movies, news, books, magazines, music, and websites—insofar as they have any ideologies to promote.
Likewise, the same reasoning goes for small and independent media and activist groups.
Compared to other AI applications, suppressing ‘wrong-think’ and promoting ‘right-think’ seems relatively easy; it requires nothing close to AGI. Data-mining companies such as YouTube, Facebook, and Twitter already use semi-automated methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological-control systems don’t even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national-security state.
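As a crude illustration of how little machinery this takes, here is a toy sketch of a semi-automated flagging pipeline. All the keywords, weights, and thresholds are hypothetical; an actual platform would use a trained text classifier, but the point is that nothing here is anywhere near AGI.

```python
# Toy sketch of a semi-automated moderation/demonetization pipeline.
# All keywords, weights, and thresholds are hypothetical placeholders.

DISSIDENT_MARKERS = {"protest": 0.4, "leak": 0.6, "corruption": 0.5}
DEMONETIZE_THRESHOLD = 0.8
SUPPRESS_THRESHOLD = 1.2

def risk_score(post: str) -> float:
    """Crude stand-in for a classifier's 'dissident content' score."""
    return sum(DISSIDENT_MARKERS.get(w, 0.0) for w in post.lower().split())

def moderate(post: str) -> str:
    s = risk_score(post)
    if s >= SUPPRESS_THRESHOLD:
        return "suppress"      # dropped from recommendations and search
    if s >= DEMONETIZE_THRESHOLD:
        return "demonetize"    # stays up, but earns no ad revenue
    return "allow"

for post in ["cat videos compilation",
             "leak exposes corruption, join the protest"]:
    print(moderate(post), "-", post)
```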
Yeah, it is a problem, though I don’t think I would classify it as an AI-safety issue. The real issue is one of control and competition. YouTube is effectively a monopoly and Facebook/Twitter are something like a duopoly, and all of them sit in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the ‘Voat Phenomenon’: any alternative to the default platform first attracts the extreme types who were the first to be ostracized from the main platform, so the alternative forever has a repulsive core community and a tarnished reputation. I’m sure AI can be used as a weapon either to support or to dismantle the strength of these institutions, but it seems better to approach the problem from a general perspective rather than treating it as specifically an AI one.