By Strong Default, ASI Will End Liberal Democracy
Cross-posted from my website.
The existence of liberal democracy—with rule of law, constraints on government power, and enfranchised citizens—relies on a balance of power where individual bad actors can’t do too much damage. Artificial superintelligence (ASI), even if it’s aligned, would end that balance by default.
It is not a question of who develops ASI. Whether the first ASI is developed by a totalitarian state or a democracy, the end result will—by strong default—be a de facto global dictatorship.
The central problem is that whoever controls ASI can defeat any opposition. Imagine a scenario where (say) DARPA develops the first superintelligence[1], and the head of the ASI training program decides to seize power. What can anyone do about it?
If the president orders the military to capture DARPA’s data centers, the ASI can defeat the military.[2]
If Congress issues a mandate that DARPA must turn over control of the ASI, DARPA can refuse, and Congress has even less recourse than the president.
If liberal democracy continues to exist, it will only be by the grace of whoever controls ASI.
There are two plausible scenarios that have some chance of avoiding a totalitarian outcome:

1. AI capabilities progress slowly.
2. The ASI itself protects liberal democracy.

I will discuss them in turn.
What if AI capabilities progress slowly?
We have a chance at averting de facto totalitarianism if two conditions hold:

1. At each step of AI development, control of AI is distributed widely.
2. At each step, the next-generation AI is not strong enough to overpower all the copies of the previous generation.
Widely distributing AI is difficult: today’s frontier LLMs require supercomputers to run, hardware requirements grow more expensive with each generation, and AI developers have strong incentives against distributing their models. Moreover, wide distribution exacerbates misalignment and misuse risks, so it’s likely not worth the tradeoff.
We do not know whether takeoff will be fast or slow; banking on a slow takeoff is an extremely risky move. Frontier AI companies are trying their best to rapidly build up to ASI, and they explicitly want to make AI do recursive self-improvement. If they succeed, it’s hard to see how liberal democracy will be able to preserve itself.
What if the ASI itself protects liberal democracy?
There is a conceivable scenario where an aligned ASI preserves liberal democracy and refuses any orders that would violate people’s civil liberties.
Above, I wrote:
If liberal democracy continues to exist, it will only be by the grace of whoever controls ASI.
That’s still true, but in this case “whoever controls ASI” would be the ASI itself. If it’s aligned in a transparent way, then maybe we can be confident that it really will preserve democracy.
Even in this scenario, there is still a small group of people who control how the ASI is trained. The hope is that, at training time, those people do not yet have enough power to prevent oversight. For example, maybe laws mandate that (1) AI developers must make their training process public and auditable and (2) the training process must steer the AI toward valuing liberal democracy. It is not at all obvious how those laws would work, how we would get them, or how they would be enforced; but at least this outcome is conceivable.
This scenario introduces some additional challenges:

- The ASI must be incorrigible with respect to protecting liberal democracy. That constrains which alignment solutions we can use, making the alignment problem harder to solve. Incorrigibility also means that if you make a mistake in designing the AI, you can’t fix it later.
- We must ensure that an immutable “protect liberal democracy” directive won’t have severe unintended consequences—which, by default, it probably will. (Think Asimov’s Three Laws of Robotics.)
- AI progress must proceed slowly enough that the appropriate laws or regulations can be put in place before it’s too late; or we must trust that the leading AI developer embeds appropriate values into its ASI.
Liberal democracy is not the true target
As the saying goes, democracy is the worst form of government except for all those other forms that have been tried. We don’t want democracy; what we want is a truly good form of government (and hopefully one day we will figure out what that is). The fear isn’t that ASI will replace democracy with one of those truly good forms of government; it’s that we will get totalitarianism.
Liberal democracy beats totalitarianism. But locking in liberal democracy prevents us from getting any actually-good governmental system. This is a dilemma.
Maybe we can avoid totalitarianism, but there is no clear path
This essay does not assert that ASI will end liberal democracy. It asserts that, by strong default, ASI will end liberal democracy (even conditional on solving the alignment problem). There may be ways to avoid this problem—I sketched out two possible paths forward. But those sketches still require many sub-problems to be solved; I do not expect things to go well by default.
[1] Or, more likely, expropriates it from a private company on a pretense of national security.

[2] For an explanation of why ASI could defeat any government’s military, see If Anyone Builds It, Everyone Dies, Chapter 6 and its online supplement. For a shorter (and online-only) explanation, see “It would be lethally dangerous to build ASIs that have the wrong goals.” Those sources argue that a misaligned ASI could defeat humanity, whereas my claim is that an aligned ASI could defeat any opposition; the arguments are the same in both cases.
---

I think the “strong default” framing overstates the case, for a few reasons.
The argument (IIUC) hinges on one actor gaining decisive, uncontested control before anyone else can respond. But that assumption does a lot of work, and I’m not sure it holds:
- We currently have dozens of serious actors across multiple adversarial jurisdictions racing simultaneously, which looks more like a setup for messy multipolarity than a clean monopoly.
- Extreme military advantage hasn’t historically guaranteed political control: the US had overwhelming superiority in Vietnam, Afghanistan, and Iraq and still couldn’t convert it into stable governance. Even on fast take-offs, bridging the gap between “ASI achieved internally” and actually running a society requires human cooperation, and sustaining that loyalty is very hard.
- The same inference (“extreme capability asymmetry, therefore inevitable authoritarianism”) was made about nuclear weapons. What emerged was contested, ugly, and dangerous, but not totalitarian. That doesn’t mean ASI follows the same path, but it does suggest the logic has a poor track record.
- Even within a single ASI-controlling organisation, individuals have their own interests, and defection, whistleblowing, and sabotage are historically common responses to illegitimate power grabs from within institutions. The DARPA-director scenario assumes a level of internal cohesion that rarely holds in practice.
I’d put the more likely default as a messy, contested outcome that preserves more democratic structure than your title implies, even if it falls well short of anything we’d be happy with.
Zooming out slightly, I’m not sure what you are actually imagining ASI looks like here, so maybe I’m talking past you. I suspect that either:
- You’re imagining a “god-like” AI whose intellectual and physical capabilities far exceed the aggregate yearly output and total resources of the current USA. In which case, even aggressive ASI timelines should be measured in a low number of decades rather than years.
- You’re imagining a “country of geniuses in a datacenter” and little more (perhaps you also get a significant number of automated military drones). In which case, I don’t think there is a strong case for the kind of overwhelming loss of democratic control you describe: the data centres will still rely on their host country for energy, human resources, etc.