I wouldn’t want to put much trust in evolution-based arguments when talking about the long-term future of human civilization, because technology seems so unpredictable, and might offer many ways to stay ahead of any problems thrown up by the slow process of biological evolution:
Maybe a totalitarian world government arises, and then performs advanced genetic engineering to make humans much more docile and willing to submit to authority. Climbing out of that hole could take a long time; maybe the new equilibrium would even be self-reinforcing somehow.
Maybe surveillance and mind-reading technologies make it incredibly easy for an oppressive totalitarian system to maintain itself against revolt, so it doesn’t really matter how the humans’ biology is slowly changing over the millennia—as long as pro-authoritarian forces stay decidedly ahead of anti-authoritarian forces, the balance will always tend towards lock-in.
Over the past several hundred million years, multicellular organisms have successfully managed to maintain control over their individual cells (which might otherwise try to “rebel” against the host body by becoming cancerous). It’s not like being multicellular has gotten any harder as time has gone on; if anything it’s probably gotten easier. In the same way, society might develop better and better mechanisms for policing and suppressing internal dissent over time.
Maybe the totalitarian world government goes on a mass sterilization campaign (perhaps putting sterilizing chemicals into the environment), and creates new children via artificial wombs or by giving select citizens a treatment to reverse the sterilization process. This would obviously throw a wrench into the way that demographic selection effects operate today.
Maybe humanity develops a cure for aging, or people upload their minds into computers, in a way that makes them immortal. Instead of a world government that has to worry about transitions of power and a changing populace over centuries, maybe everyone is unified under a single dictator who can consolidate and wield power indefinitely. (Consider the many dictators—Stalin, Mao, Kim Il-sung—who ruled uncontested right up to the moment of their deaths, or Robert Mugabe, who clung to power for nearly four decades.)
Jackson—thanks for the interesting examples. Have you written anything more detailed about any of these, or know anyone who has?
Some of these sound technically feasible within a few decades or centuries, but most raise the issue—what motivation would the powerful people/AIs/whatever running society have for doing any of these things? Some of them sound pointlessly sadistic, costly, and unaligned with the powerful beings’ interests. (For example, why perpetuate a species of docile post-human submissives, instead of just automating whatever one wants to do? Why keep copies of everyone’s uploaded consciousness if they’re not smart and empowered enough to actually do anything useful?)
I’d love to see some serious game-theory analysis of these kinds of scenarios—e.g. which kinds of powerful elite behavior (in perpetuating a ‘global totalitarian state’) would actually make rational sense across millennia, versus which are more like Black Mirror dystopian fantasies that don’t actually make sense in terms of anyone’s long-term interests.
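To gesture at the shape of analysis I mean, here is a minimal toy sketch (this is decision theory rather than real game theory, and every number in it is an invented assumption: the payoffs, the per-period collapse risks, the discount rate). It just compares the expected discounted payoff a long-lived elite gets from “keep repressing a human population” versus “just automate and disengage”.

```python
# Toy, purely illustrative comparison: does indefinite repression ever beat
# simply automating, for a self-interested elite with a very long horizon?
# All payoffs, risks, and discount rates below are invented assumptions.

def discounted_value(per_period_payoff, collapse_risk, discount, horizon):
    """Expected discounted payoff of a strategy that pays `per_period_payoff`
    each period and ends permanently with probability `collapse_risk` per period."""
    total, survival = 0.0, 1.0
    for t in range(horizon):
        total += survival * (discount ** t) * per_period_payoff
        survival *= 1 - collapse_risk
    return total

# Hypothetical strategy A: keep repressing (costly to run, some revolt risk per year).
repression = discounted_value(1.0, collapse_risk=0.002, discount=0.999, horizon=10_000)

# Hypothetical strategy B: automate and disengage (cheaper, lower risk).
automation = discounted_value(1.5, collapse_risk=0.0005, discount=0.999, horizon=10_000)

print(f"repression: {repression:.0f}, automation: {automation:.0f}")
```

With these made-up numbers automation dominates, which matches my intuition above; the interesting question is whether any realistic parameters flip that ordering.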
As far as I know, there is really not much EA thought about this idea of “stable totalitarianism”, which is odd considering that it is often brought up right when people are introducing the fundamental logic of “longtermist” EA, as you mentioned. The EA Forum just has a couple of oddball articles, including this one brainstorming how we might try to screen out mean-spirited people to prevent them from rising to power, this section of a post on Brain-Computer Interfaces about the obvious totalitarian potential of being able to read your subjects’ minds or wire reward/punishment directly into their brains, this essay by Bryan Caplan, and a couple of articles about protecting democracy (although these are more near-term-oriented)… compared to the usual thoroughness that EA brings to the table, it’s pretty lame!
Maybe there are other related subcultures beyond EA, where the idea of stable totalitarianism has been given more thought? Crypto people are pretty libertarian/paranoid, so maybe they have good takes on this stuff? Dunno...
One related area where people (including myself) have written a bit more is the “vulnerable world hypothesis”—situations where you might actually need global totalitarianism in order for humanity to control an incredibly dangerous technology.
fwiw I think conventional political science literature or most historians would tell you the idea is really out there.
Don’t historians often write about how the totalitarian governments of the 20th century were enabled by various new technologies? (E.g., radio and newspapers for propagating ideology, advances in bureaucratic administration that helped nations keep tabs on millions of individual citizens, and so on—people are always mentioning how IBM made the punch-card machines that the Nazis used to help organize the Holocaust.) I don’t think that stable totalitarianism is very plausible with modern-day technology. But new technology is being developed all the time—the fear is that, just as 20th-century technology made “totalitarianism” possible for the first time, the balance of new technology might shift in a way that favors centralization of government power even more strongly.
Political commentators often note that China has developed a lot of innovative high-tech methods for controlling its Uighur population: AI-based facial recognition and gait analysis to track people’s movements around cities, social credit scores to lock them out of opportunities, forced sterilization to reduce birthrates, etc. Obviously China’s innovations aren’t good enough that it will be able to outcompete the free world and attain perfect global hegemony, or anything like that! But technology is unpredictable; future surveillance tech might give much bigger advantages to authoritarian systems.
Jackson—thanks for your comment.
I agree that historically, new technologies often allow new forms of political control (but also new forms of political resistance and rebellion). We’re seeing this with social media and algorithmic ‘bubble formation’ that increases polarization.
Your last paragraph identifies what I think is the latent fear among many EAs: when they talk about a ‘permanent global totalitarian state’, I think they’re often implicitly extrapolating from the current Chinese state, and imagining it augmented by much stronger AI. Trouble is, I think these fears are often (but not always) based on some pretty serious misunderstandings of China, and its history, government, economy, culture, and ethos.
By most objective standards, I think the CCP over the last 100 years has actually been more adaptable, dynamic, and flexible in its approach to policy than most ‘liberal democracies’ have been—with diverse approaches ranging from Mao’s centralized economic control to Mao’s Cultural Revolution to Deng’s economic liberalization to Hu’s humble meritocracy to Xi’s re-assertive nationalism. Decade by decade, China’s policies change quite dramatically, even as the CCP remains in power. By contrast, Western ‘liberal democracies’ tend to be run by the same deep-state bureaucrats and legislatively gridlocked duopolies that rarely deviate from a post-WWII centrist status quo. Anyway, I think EAs interested in whether ‘China + AI’ provides a credible model for a ‘permanent totalitarian state’ could often benefit from learning a bit more about Chinese history over the last century. (Recommended podcasts: ‘China Talk’ and ‘China History Podcast’).
This post itself sounds very misinformed about CCP history over the past hundred years.
Yes, the CCP changes, but its underlying logic of unlimited power does not, nor do the dangers associated with it.
Yes, it adapts to its external environment in order to survive, but the domestic costs of doing so cannot be lightly overlooked—some of the worst famines, political purges, mass shootings of teenage students, mass imprisonment, and forced labour camps (and the list goes on) that humanity has ever seen.
There is a tendency among some China watchers, in their eagerness to ‘educate’ the West about China, to adopt the official narrative and history of the CCP too quickly. In doing so, they create a dangerous alliance, often out of ignorance more than willingness. Only when one gets off the hook of official CCP propaganda can one truly begin to see China as it is (and sometimes that propaganda does seem terribly enticing: hundreds of millions of people literally lifted out of poverty by the Mother Party, a rising presence on the global stage, modern technology being developed, etc.). And I’m coming to the view that the moral instincts of ignorant people reacting to what they see in China are often more laudable than those of ‘experts’, who claim to know the subtleties but in effect are really finding hopeless justifications for a morally bankrupt system. I’d recommend reading not Western China watchers but well-respected (and often suppressed) Chinese scholars such as Gao Hua, Qin Hui, and Shen Zhihua, to name a few.