[Question] Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribes to their values?

Hi all! First I want to say that I have really enjoyed this forum over the past few months, and I eventually decided to create an account to post this question. I am still working on the short version of this question, so thank you for bearing with me through the long version.

As some of you may know, last year we saw unprecedented uprisings against totalitarian regimes. As a Chinese national active in the diaspora dissident community, I have never been more encouraged by the courage and creativity of my people; as an ML practitioner, I am more and more worried about AGI becoming the most powerful tool of governance humanity has yet seen.

China’s Zero-COVID policy gave us a first taste of what this future would feel like: personal location tracking limits your freedom of movement; a remote “system” decides what you can or cannot do and informs you via an app on your phone; and when you try to push back, it is like hitting a bureaucratic wall.

Most importantly, Zero-COVID gave rise to a whole value system that sees society as a simplified trolley problem: the government, an all-knowing entity, holds the lever and decides what is best for the whole. Collectivism is equated with altruism, individualism with selfishness, and the most honorable thing an individual can do is obey. This value system is quite compelling and has been drilled into every grade-school kid; the US’s failure and massive death toll also serve as a convenient gotcha.

Needless to say, many people in China do not subscribe to this value system, but many do, and more often than not it is the latter group who are the agents of day-to-day suppression. The policy eventually collapsed partly due to the uprising, but even at the height of the uprising there was still significant momentum on the pro-Zero-COVID side to keep the policy going. My suspicion is that what eventually brought down Zero-COVID was the unbearable price tag, especially for local governments. However, I can easily see that if COVID had happened in 2030 instead of 2020 (10 years are nothing in earth years), the price tag would have been much more sustainable.

It is no news that 1) AI tends to converge toward monopoly, 2) totalitarian regimes will want to use AI to extend their power, and 3) AI alignment seeks to give us the ability to embed our values into AI. I deeply worry about the gentle seduction of AI technology in China: seducing us into yielding more and more of our agency to an AGI aligned with a value system that represents the interests of the ruling entity, with less and less room left for pushing back.