I think the “strong default” framing overstates the case, for a few reasons.
The argument (IIUC) hinges on one actor gaining decisive, uncontested control before anyone else can respond. But that assumption does a lot of work, and I’m not sure it holds:
We currently have dozens of serious actors across multiple adversarial jurisdictions racing simultaneously, which looks more like a setup for messy multipolarity than a clean monopoly.
Extreme military advantage hasn’t historically guaranteed political control—the US had overwhelming superiority in Vietnam, Afghanistan and Iraq and still couldn’t convert that into stable governance. Even on fast take-offs, bridging the gap between “ASI achieved internally” and actually running a society requires human cooperation, and sustaining that loyalty is very hard.
The same inference (“extreme capability asymmetry, therefore inevitable authoritarianism”) was made about nuclear weapons. What emerged was contested, ugly and dangerous, but not totalitarian. That of course doesn’t mean ASI follows the same path, but it’s worth asking whether you would have predicted that outcome in advance.
Even within a single ASI-controlling organisation, individuals have interests, and defection, whistleblowing and sabotage are historically common responses to illegitimate power grabs from within institutions. The DARPA director scenario assumes a level of internal cohesion that, imo, rarely holds in practice.
I’d put the more likely default as a messy, contested outcome that preserves more democratic structure than your title implies, even if it falls well short of anything we’d be happy with.
Zooming out slightly, I’m not sure what you are actually imagining ASI looks like here, so maybe I’m talking past you. I suspect that either:
You’re imagining a “god-like” AI which has intellectual and physical capabilities that far exceed the aggregate yearly output and total resources of the current USA.
In which case, even aggressive ASI timelines should be measured in a low number of decades rather than years. (Edit: I should have said 5-15 years here; “low number of decades” makes it sound like 30 years. I still think the general point about democratic societies having time to adapt stands.)
You’re imagining a “country of geniuses in a datacenter” and little more (perhaps you also get a significant number of automated military drones).
In which case, I don’t think there is a strong case for the kind of overwhelming loss of democratic control. The data centres will still rely on their host country for energy, human resources, etc.