One, perhaps underrated, AI risk.
In March 2022, a former police captain in the Russian Interior Ministry had three telephone conversations with friends and former colleagues. A native of Ukraine, in those private conversations he sharply and passionately criticised the invasion. The former policeman was promptly arrested, and 13 months later, in April 2023, he was sentenced to 7 years' imprisonment (the prosecutor had asked for 9). The prosecution argued that because the conversations were wiretapped, they amounted to public speeches, and that they inflicted psychological damage on the public. During the court hearings, the “public” – an operative who had listened to the wiretaps – complained that the conversations gave him “a feeling of anxiety, fear and insecurity”.
Sadly, this story is not unique. Oppressive regimes have rich histories of suppressing dissent, often by whatever means it takes and with little regard for legality. No one should doubt their willingness to ruthlessly quash opposition – to stamp a boot on a human face, as Orwell so chillingly put it in 1984.
However, this particular case falls into a separate category, in which a government searching for dissent tries to probe as deeply as possible. AI can certainly become a highly effective instrument for that objective. It is already widely used to process visual information to locate and track people. One of the next steps will likely be a significant increase in the monitoring of communications by people living under oppressive regimes. At some point, this could be upgraded to real-time monitoring, with dissenters identified quickly and with high accuracy. Prosecution cases could also be prepared promptly and accurately by AIs and forwarded to courts, which may be advised or even chaired by other AIs.
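To make concrete how low the technical bar for such text monitoring already is, here is a minimal sketch using a generic off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice, labels, threshold, and sample messages are illustrative assumptions, not a description of any real system:

```python
# A minimal sketch of automated "dissent detection" on intercepted text.
# Everything here (model, labels, threshold, messages) is an illustrative
# assumption, not a real deployment.
from transformers import pipeline

# Off-the-shelf zero-shot classifier; no custom training data needed.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical intercepted messages.
intercepted = [
    "The weather is lovely today, let's go for a walk.",
    "This war is a crime, and the government is lying to all of us.",
]

labels = ["criticism of the government", "everyday small talk"]

for text in intercepted:
    result = classifier(text, candidate_labels=labels)
    # Results are sorted by score, highest first.
    if (result["labels"][0] == "criticism of the government"
            and result["scores"][0] > 0.8):
        print(f"FLAGGED for review: {text!r} "
              f"(confidence {result['scores'][0]:.2f})")
```

A hobbyist could assemble something like this in an afternoon; a state-scale version would differ mainly in throughput, language coverage, and integration with intercepted data streams, not in any fundamental capability.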
To that you may add emerging brain-computer technologies – see @Jack's post "A New X-Risk Factor: Brain-Computer Interfaces". This mix could make the effective identification of thoughtcrime – the gold standard and cherished objective of any self-respecting oppressive regime – suddenly appear to be within technological reach. With it may come utterly dystopian countries populated by people degraded into compliant biorobots.
@Toby_Ord writes in The Precipice (partly drawing on Nick Bostrom):
“In a future reminiscent of George Orwell’s 1984, the entire world has become locked under the rule of an oppressive totalitarian regime determined to perpetuate itself. Through powerful technologically-enabled indoctrination, surveillance, and enforcement, it has become impossible for even a handful of dissidents to find each other, let alone stage an uprising. With everyone on Earth living under such rule, the regime is stable from threats internal and external. If such a regime could be maintained indefinitely, then descent into this totalitarian future would also have much in common with extinction, just a narrow range of terrible futures remaining and no way out.
Following Bostrom, I shall call these existential catastrophes, defining them as follows. An existential catastrophe is the destruction of humanity’s long-term potential. An existential risk is a risk that threatens the destruction of humanity’s long-term potential.”
What can be done to counteract this risk? Three lines of action appear capable of making an impact:
1) Degrading oppressive regimes' scientific potential in key technological areas that could facilitate advanced mass population control. Democratic societies should adopt comprehensive programmes to attract and retain researchers working in such fields under oppressive regimes, helping them relocate with their families via expedited visa programmes, job placement assistance, and research grants.
2) Enhancing the protection of scientific and technological advancements in sensitive areas. Such measures did not, of course, prevent the USSR from replicating the nuclear bomb, but in the case of advanced AI technologies even a time lag may prove a significant advantage for democratic societies (as strongly argued by @leopold in https://situational-awareness.ai). Export controls, international cooperation, and advanced cybersecurity will all play an important role.
3) Publicity – timely disclosure of technological projects undertaken by oppressive regimes in the field of advanced mass population control. Investigative journalism, whistleblower-protection programmes, and informing people living under oppressive regimes about the possible dangers will all create additional obstacles for regimes implementing “Big Brother” technological programmes.
Of course, such efforts require resources far beyond the scope of the EA movement. But what the EA community can do is keep informing the public and advising governments in the democratic countries where it has a presence.
The misuse of AI by non-democratic regimes poses an existential risk by threatening humanity's long-term potential. It is imperative that we recognise and address this risk proactively to safeguard our future against the rise of digital totalitarianism.
P.S.
1) A bit of positive news in the story of the convicted ex-policeman. In May 2024, the court of appeals unexpectedly sent the case back to the court of first instance for a new review. In July 2024, that court overturned the conviction and returned the case to the prosecution. However, he still remains in jail awaiting new court hearings. It will soon be a thousand days since he was imprisoned, and so far there is no reason to expect that he will be freed soon.
2) On 27 Nov 2024, Russia and Iran signed a few cooperation agreements on AI.
https://en.irna.ir/news/85412928/Iran-Russia-sign-AI-cooperation-document
https://tehrantimes.com/news/495933/Tehran-Moscow-to-cooperate-on-AI-ethics
Executive summary: The misuse of AI by oppressive regimes to suppress dissent and control populations poses an existential risk to humanity’s long-term potential, necessitating proactive measures to counteract this threat.
Key points:
- Oppressive regimes are increasingly using AI to monitor communications and identify dissent, as exemplified by the case of a Russian ex-policeman sentenced to 7 years in prison for criticizing the invasion of Ukraine in private conversations.
- The combination of AI, surveillance, and emerging brain-computer technologies could enable oppressive regimes to create dystopian societies where thoughtcrime is effectively identified and punished, leading to an existential catastrophe.
- To counteract this risk, democratic societies should: a) attract and retain researchers from oppressive regimes, b) enhance protection of sensitive scientific and technological advancements, and c) publicize technological projects undertaken by oppressive regimes to control populations.
- The EA community can contribute by informing the public and advising governments in democratic countries about the existential risk posed by the misuse of AI by non-democratic regimes.