Ideological engineering and social control: A neglected topic in AI safety research?

Will enhanced government control of populations’ behaviors and ideologies become one of AI’s biggest medium-term safety risks?
For example, China seems determined to gain a decisive lead in AI research by 2030, according to the new plan released this summer by its State Council:
https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf
One of China’s key proposed applications is promoting ‘social stability’ and automated ‘social governance’ through comprehensive monitoring of public spaces (via large-scale networks of sensors for face recognition, voice recognition, movement patterns, etc.) and of social media (via large-scale monitoring of online activity). This would allow improved ‘anti-terrorism’ protection, but also much easier automated monitoring and suppression of dissident people and ideas.

Over the longer term, inverse reinforcement learning could allow AI systems to model the current preferences and likely media reactions of populations, letting new AI propaganda systems pre-test ideological messaging with much greater accuracy and shape government ‘talking points’, policy rationales, and ads to be much more persuasive. Likewise, the big US, UK, and EU media conglomerates could weaponize AI ideological engineering systems to craft more effective messaging in their TV, movies, news, books, magazines, music, and websites, insofar as they have any ideologies to promote. (I think it’s become pretty clear that they do.)

As people spend more time with augmented reality systems, AI systems might automatically attach visual labels to certain ideas as ‘hate speech’ or certain people as ‘hate groups’, enabling mass automated social ostracism of dissident opinions. As people spend more time in virtual reality environments for education, work, and leisure, AI ideological control could become even more intensive, leaving most citizens almost totally disconnected from reality most of the time. Applications of AI ideological control in mass children’s education seem especially horrifying.
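To make the ‘pre-testing’ idea above concrete: nothing like an implementation is described in the State Council plan, but the basic machinery needn’t be exotic. Here is a purely illustrative toy sketch in Python, which substitutes a simple supervised text classifier for the inverse-reinforcement-learning approach mentioned above. The ‘reaction model’, the past messages, and the approval labels are all invented for the example; the point is only the workflow of scoring candidate talking points against a model of audience reactions before any of them reach the public.

```python
# Toy, purely illustrative sketch: rank candidate "talking points" by a
# model's predicted audience approval. All data, labels, and names invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: past messages and whether they polled well.
past_messages = [
    "new security measures keep your family safe",
    "temporary restrictions protect public order",
    "critics spread dangerous misinformation",
    "higher fees fund essential services",
]
polled_well = [1, 1, 1, 0]  # toy labels, e.g. from past opinion surveys

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_messages)
reaction_model = LogisticRegression().fit(X, polled_well)

# Pre-test candidate talking points before any of them are released.
candidates = [
    "the new monitoring program keeps your family safe",
    "the new monitoring program expands state surveillance",
]
scores = reaction_model.predict_proba(vectorizer.transform(candidates))[:, 1]
for msg, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {msg}")
```

A real system would presumably use richer models and population-scale behavioral data rather than four toy strings, but the loop of ‘model reactions, score drafts, ship the most persuasive one’ is the same.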
Compared to other AI applications, suppressing ‘wrong-think’ and promoting ‘right-think’ seems relatively easy. It requires nowhere near AGI. Data-mining companies such as YouTube, Facebook, and Twitter are already using semi-automated methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological control systems don’t even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national security state.
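To gesture at how little machinery ‘semi-automated’ suppression requires, here is a deliberately crude, hypothetical sketch (standard-library Python only, with an invented keyword list, thresholds, and actions). Real platforms presumably use learned classifiers rather than keyword counts; the point is just the score-then-act pipeline shape, which requires nothing close to AGI.

```python
# Toy illustration of an automated "allow / demonetize / suppress" pipeline.
# The flagged-term list, thresholds, and actions are all hypothetical.
FLAGGED_TERMS = {"dissident", "protest", "leak"}  # invented policy list

def moderation_action(post_text: str) -> str:
    """Return an automated action for a post based on a crude keyword score."""
    words = post_text.lower().split()
    score = sum(word.strip(".,!?") in FLAGGED_TERMS for word in words)
    if score >= 2:
        return "suppress"      # hide from feeds and search results
    if score == 1:
        return "demonetize"    # keep visible but cut off ad revenue
    return "allow"

posts = [
    "Cute cat video compilation",
    "How to protest the new surveillance law",
    "Leaked documents show dissident crackdown before the protest",
]
for post in posts:
    print(moderation_action(post), "->", post)
```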
AI-enhanced ideological control of civilians by governments and by near-monopoly corporations might turn into ‘1984’ on steroids. We might find ourselves in a ‘thought bubble’ that’s very difficult to escape, long before AGI becomes an issue.
This probably isn’t an existential risk, but it could be a serious threat to human and animal welfare whenever governments and near-monopolies realize that their interests diverge from those of their citizens and non-human subjects. And it could amplify other global catastrophic risks wherever citizen oversight would otherwise decrease risks from bioweapons, pandemics, nuclear weapons, more capable AI systems, and so on.
Has anyone written anything good on this problem of AI ideological engineering systems? I’d appreciate any refs, links, or comments.
(I posted a shorter version of this query on the ‘AI Safety Discussion’ group on Facebook.)