I would be interested in hearing more from Caplan about “stable totalitarianism”, but not if it’s just going to be a retread of the abstract concept that stable totalitarianism seems bad, what if Stalin had lived forever, etc. Some questions I’d be interested in:
Is Caplan worried by any of the specific technologies that might make stable totalitarianism more likely? Besides life-extension medical technology, we also have things like:
LLM-powered censorship
other AI-powered surveillance tech like facial recognition and gait tracking, or potential AI-powered advancements in lie detection or persuasion/propaganda
recent progress in literal mind-reading via MRI, and other neurotech
“social credit” systems, “central bank digital currencies”, etc, that could be abused to surveil and control people’s financial lives
Does Caplan see more risk of stable totalitarianism arising from:
The existing dominant international alliance (USA + Europe + Japan + etc) gradually drifting toward global control and totalitarianism over time? (slowly implementing more things like CBDCs and pervasive censorship, perhaps spurred by the legitimate desire to control dangerous technologies like AI and biotech!)
Some individual powerful country (like China, or the USA after a very bad election) going totalitarian and then somehow extending that system to the rest of the world?
Some relatively small country, which doesn’t currently have much military power or influence, just innovates a new type of government which is very oppressive but nonetheless economically outcompetes liberal democracy in the long run? (It’s my impression that this is kind of how fascism seemed around WW2, and communism in the early cold war… in the beginning, people were legitimately worried that these totalitarian systems might just be more productive ways of running an economy, even though they were antithetical to human freedom. In the same way that modern China’s government is more capable and better-organized than a simple strongman dictatorship, is Caplan worried that it’s possible to somehow bolt together prediction markets + social credit systems + corporate best practices, or whatever, and outcompete the democracies?)
What does Caplan think we can do to make totalitarianism less likely?
Preventative bans / regulation of specifically worrying technologies?
Trying to deliberately develop OTHER technologies which make totalitarianism harder (like, idk, the internet or bitcoin), or to develop technologies that make the privacy/security tradeoff of surveillance technologies less bad?
Trying to reform our existing, free societies to make them generally stronger, more prosperous, and more resilient against abuses?
Just maintaining CONSTANT VIGILANCE against bad political actors, and trying not to vote them into power?
Doing more EA-style research into the nature of stable totalitarianism, to map out the risks?