After following the Ukraine war closely for almost three years, I naturally also watch China’s potential for military expansionism. Whereas past leaders of China talked about “forceful if necessary” reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it―especially since the U.S. is frankly showing so much weakness in Ukraine. I know this isn’t how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country’s communist past, and is a conservative gambler: he will take a gamble if the odds seem sufficiently in his favor. Putin badly miscalculated his odds in Ukraine, but Russia’s GDP and population were $1.843 trillion and 145 million, versus $17.8 trillion and 1.4 billion for China. At the same time, Taiwan is much less populous than Ukraine, and its would-be defenders in the USA/EU/Japan are not as strong as China in naval terms (and would have to operate at much longer range). Last but not least, China is the factory of the world―if it decides to pursue military domination, it can probably manage that fairly well while simultaneously selling us vital goods at suddenly-inflated prices.
So when I hear that China has ramped up nuclear weapon production, I immediately read it as a nod toward Taiwan. If we don’t want an invasion of Taiwan, what do we do? Liberals have a habit of magical thinking in military matters, talking of diplomacy, complaining about U.S. “war mongers”, and running protests with “No Nukes” signs. But an invasion of Taiwan would have nothing to do with the U.S.; Xi simply *wants* Taiwan and has the power to take it. If he makes that decision, no words can stop him. So the Free World has no role to play here other than (1) to deter and (2) optionally, to help Taiwan if Xi invades anyway.
Not all deterrents are military, of course; China and the USA would surely do huge economic damage to each other if China invades, and that is a deterrent. But I think China has the upper hand here in ways the USA can’t match. On paper the USA has more military spending, but for practical purposes it is the underdog in a war for Taiwan[1]. Moreover, President Xi surely noticed that all it took was a few comments from Putin about nuclear weapons to close off the possibility of a no-fly zone in Ukraine, NATO troops on the ground, the use of American weapons against Russian territory (for years), etc. So I think Xi can reasonably―and correctly―conclude that China wants Taiwan more than the USA wants to defend it. (To me at least, comments about how we can’t spend more than 4% of the defense budget on Ukraine “because we need to be ready to fight China” just show how unserious the USA is about defending democracy.) Still, the USA aiding Taiwan is certainly a risk for Xi, and I think we need to make that risk look as big and scary as possible.
All this is to say that warfighting isn’t the point―who knows if Trump would even bother. The point is to create a credible deterrent as part of efforts to stop the Free World from shrinking even further. If war comes, maybe we fight, maybe we don’t. But war is more likely whenever dictators think they are stronger than their victims.
I would like more EAs to think seriously about containment, democracy promotion, and even epistemic defenses. For what good is it to make people healthier and more prosperous if those people later end up in a dictatorship that conscripts them or their children to fight wars―including, perhaps, wars against democracies? (I’m thinking especially of India and the BJP here. And yes, it’s still good to help them despite the risk; I’m just saying it’s not enough and we should have even broader horizons.)
Granted, maybe we can’t do anything. Maybe there’s no tractable and cost-effective intervention in this space. There are probably neglected things, though―for example, when the Ukraine war first started, I thought Bryan Caplan’s “make desertion fast” idea was good, and I wish somebody had looked into counterpropaganda operations that could have made the concept work. Still, I would like EAs to understand some things:
The risks of geopolitics have returned―basically, Cold War stuff.
EAs focus too much on x-risk and s-risk relative to catastrophic risk. Technically a c-risk is far less bad than an x-risk, but it doesn’t feel less bad. C-risk is more emotionally resonant for people, and the risk-management work probably overlaps heavily between the two, so it’s probably easier to connect with policymakers over c-risk than over x-risk.
I haven’t heard EAs talk about “loss-of-influence” risk. One form of this would be AGI takeover: if AGIs are much faster, smarter, and cheaper than us (whether they are controlled by humans or by themselves), a likely outcome is one in which ordinary humans have no control over what happens: either AGIs themselves, or dictators with AGI armies, make all the decisions. In this sense, it sure seems we are at the hinge of history, since future humans may have no control and past humans had an inadequate understanding of their world. But in this post I’m pointing to a more subtle loss of control, in which the balance of power shifts toward dictatorships until they decide to invade democracies further and further outside their sphere of influence. If global power shifts toward highly censored strongman regimes, EAs’ influence could eventually wane to zero.
Not with a bang, but with a quadrillion tiny robots
The year is 2038. A large company has spent the last 18 years developing an additive nanofactory that can produce and scan almost any object at the atomic scale, using a supply of “element cartridges” which contain base elements and compounds that are easily broken into base elements (e.g. graphite, ammonia, silicon crystals, water, table salt). The company worked with a university to develop advanced algorithms, published freely in the open-access scientific literature, for constructing virtually any molecule or molecular lattice from the elements in the cartridge, including proteins (though it is not optimized for this, as there are already other companies that specialize in bioengineering). Each factory can produce objects up to one cubic centimetre in size in a vacuum-sealed chamber, and printing something that large could take 2 or 3 weeks. The units also have a “3D scan” ability that builds an atomic-scale model of any outer surface by “feeling” it; this ability is also used during 3D printing to verify that the object isn’t moving during fabrication or, if the object does move, to characterize and potentially correct the problem. The units, which supersede a simpler and less flexible model, have just gone on sale for $10 million apiece. Several universities and companies order one.
In 2040, a millionaire who loves nanotechnology wants to democratize the technology, dreaming of various benefits it could bring to the world. He hires a chip designer, a nanotechnology expert, and a few grad students enthusiastic about the technology, and starts a small business that designs a USB-C-powered Nanofab™ based on a royalty-free nanoconstructed silicon RISC-V chip and a Linux-based OS in flash storage, plus a custom ROM designed to help load the initial firmware (as electron patterns cannot be 3D-printed). Its external dimensions are 7 mm × 11 mm × 4 mm and it can build objects up to 7 mm × 3 mm × 2 mm in size; it is about the same speed as the original factory it was made from, and supports 3D scanning too. In particular, the factory is designed to make hand-assembled copies of itself by producing and ejecting a sequence of 8 pieces over 7 days, which a person can snap together into a complete unit. One must place an element cartridge on top, which is twice the size of the factory itself; the empty cartridge is also designed to be constructed by the factory, and it has a connector allowing it to be quickly refilled from a larger cartridge.
The millionaire’s company is located in rented office space next to a university, from which it rents blocks of time on one of the university’s nanofactory units. Three months after completing and testing the first factory, the office is filled with over a thousand Nanofabs, all spawned from the first copy ever made. Employees tire of snapping together factories by hand, so they rent a house and hire low-wage employees to spend their days snapping together factories and cartridges for sale to the public, while the office remains devoted to technology development. Factories with cartridges initially sell for $75 each, and refills cost $30. The blueprints for the factory are free for noncommercial use, but the cartridges are patented, proprietary modules that must be purchased (along with raw elements) from the company. The company puts service booths in malls for selling Nanofabs™ and refilling cartridges, though it also sells everything online.
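As a rough sanity check on the scenario’s numbers (a sketch, not part of the story: the 7-day copy cycle and the three-month window come from the scenario, while the assumption that every assembled unit spends all its time printing new copies, with negligible assembly time, is mine):

```python
# Rough sanity check of the self-replication numbers in the scenario above.
# Assumption (mine): every assembled Nanofab spends all its time printing parts
# for new copies, and snap-together assembly time is negligible.

copy_cycle_days = 7     # from the scenario: 8 pieces ejected over 7 days
window_days = 90        # "three months after completing and testing the first factory"

cycles = window_days // copy_cycle_days   # 12 complete replication cycles
units = 2 ** cycles                       # fleet doubles each cycle, starting from 1

print(cycles, units)    # 12 cycles -> 4096 units, consistent with "over a thousand"
```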
By 2052 the millionaire is a billionaire, and other companies spring up to sell competing cartridges, raw materials, and eventually, high-speed nanofactories. Nanofabs are soon used by millions of people and companies for printing a vast assortment of tiny devices, from skin-adhesive smartphones, to contact lenses that can record and transcribe video/audio of every moment of every day, to carbon-fiber “ringguns”, small untraceable handguns that attach to a pair of human fingers. Meanwhile, a field of “artificial life” emerges, illegal nanofabricated drugs are rampant, the presidents of China and/or Russia have long since given themselves absolute power (while maintaining a pretense that they haven’t), and research into AGI has started to bear fruit....
Question: how might these developments lead to the end of human civilization? Is it more likely to be destroyed by AGIs or humans? What if the original technology hadn’t been so open—might one group of humans or AGIs gain a supreme technological advantage over everyone else, or does this just delay the democratization process?
Do you see evidence from 2020 technology that such technology could be developed by 2038, with even a low probability?
Of course, even a longer development timeline could end with many of the same problems. But it seems likely that these problems are smaller-scale than those we would expect to see from misaligned artificial intelligence. We already have examples of countries where one or more of guns, surveillance, and drugs run rampant, and I don’t immediately see the connection to catastrophic risk.
It’s unclear to me whether nanotechnology really makes it much easier for humans to harm each other, or whether a superintelligent AI would become much more threatening with this technology than without it (especially since it would presumably be easy enough to build in a future advanced society, whether or not humans had built it first).
Your questions are good ones to ask, and similar to questions being asked about AI in many EA-affiliated research institutions. I’m not an expert in that space, but you might be interested in subscribing to the Alignment Newsletter if you aren’t already and want a good sample of the work being done.
After further thought, I decided 2038 was probably at least a few years too early for the highly general-purpose nanotechnology I described. Still, people may be able to go a long way with precursor technologies that can’t build arbitrary nanostructures, but can still build an interesting variety of nanostructures.
Meanwhile I would be surprised if a superintelligent AGI emerged before 2050—though if it does, I expect it to be dangerously misaligned. But I have little specific knowledge I could use to estimate nanotech timelines accurately, and my uncertainty on AGI is even greater because the design space of minds is so unknown — AFAIK not just to me but to everyone. This AI alignment newsletter might well improve my understanding of AGI risk, but then again, if there were a “nanotech risks newsletter”, maybe it would teach me how nanotech is incredibly dangerous too.
I’ve been thinking that there is a “fallacious, yet reasonable as a default/fallback” way to choose moral circles based on the Anthropic principle, which is closely related to my article “The Putin Fallacy―Let’s Try It Out”. It’s based on the idea that consciousness is “real” (part of the territory, not the map), in the same sense that quarks are real but cars are not. In this view, we say: P-zombies may be possible, but if consciousness is real (part of the territory), then by the Anthropic principle we are not P-zombies, since P-zombies by definition do not have real experiences. (To look at it another way, P-zombies are intelligences that do not concentrate qualia or valence, so in a solar system of P-zombies, something that experiences qualia is as likely to be found alongside one proton as any other, and there are about 10^20 times more protons in the Sun than there are in the minds of everyone on Zombie Earth combined.) I also think that real qualia/valence is the fundamental object of moral value (also reasonable IMO, for why should an object with no qualia and no valence have intrinsic worth?).
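The 10^20 figure is a back-of-the-envelope order-of-magnitude claim; here is a minimal sketch checking it. The modeling choices (a ~1.4 kg, roughly water-like brain; a population of about 8 billion; a Sun that is ~74% hydrogen and ~24% helium by mass) are my assumptions, not from the argument itself:

```python
# Order-of-magnitude check of the "10^20 times more protons in the Sun than in
# all human brains" figure. Assumptions (mine): brains are ~1.4 kg of roughly
# water-like matter, population ~8e9, Sun is ~74% H / ~24% He by mass.

M_SUN = 1.99e30          # kg
M_PROTON = 1.67e-27      # kg (~ one atomic mass unit)

# Protons in the Sun: all of hydrogen's nucleons, half of helium's.
protons_sun = 0.74 * M_SUN / M_PROTON + 0.5 * (0.24 * M_SUN / M_PROTON)

# Protons in all human brains: in water (H2O), 10 of 18 nucleons are protons.
brain_mass_kg = 1.4
population = 8e9
protons_brains = (10 / 18) * brain_mass_kg / M_PROTON * population

print(f"{protons_sun:.1e}")                   # ~1e57
print(f"{protons_brains:.1e}")                # ~4e36
print(f"{protons_sun / protons_brains:.0e}")  # ~3e20 -> the 10^20 claim holds up
```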
By the Anthropic principle, it is reasonable to assume that whatever we happen to be is somewhat typical among beings that have qualia/valence, and thus among beings that have moral worth. By this reasoning, it is unlikely that the sum total |W| of all qualia/valence in the world is dramatically larger than the sum total |H| of all qualia/valence among humans, because if |W| >> |H|, you and I would be unlikely to find ourselves in set H. I caution that, while reasonable, this view is necessarily uncertain, and thus fallacious and morally hazardous if treated as a certainty. Yet if we are to allocate our resources in the absence of any scientific clarity about which animals have qualia/valence, I think we should take this idea into consideration.
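To spell out the step from “we find ourselves in H” to “probably |W| is not much larger than |H|” (a minimal sketch in symbols, assuming the anthropic reasoning amounts to treating oneself as a random draw, weighted by qualia/valence, from all qualia-bearing beings):

$$P(\text{I am in } H) \approx \frac{|H|}{|W|}$$

so by Bayes’ rule, the observation “I am in H” shifts credence away from hypotheses on which |W| >> |H|:

$$\frac{P(|W| \gg |H| \mid \text{I am in } H)}{P(|W| \approx |H| \mid \text{I am in } H)} = \frac{P(\text{I am in } H \mid |W| \gg |H|)}{P(\text{I am in } H \mid |W| \approx |H|)} \cdot \frac{P(|W| \gg |H|)}{P(|W| \approx |H|)} \ll \frac{P(|W| \gg |H|)}{P(|W| \approx |H|)}$$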
P.S. Given the election results, I hope more people are now doing the soul-searching we should have done in 2016. I proposed my intervention “Let’s Make the Truth Easier to Find” on the EA Forum in March 2023. It’s necessarily a partial solution, but I’m very interested to know why EAs generally weren’t interested in it. I do encourage people to investigate for themselves why Mr. Post Truth himself has ended up with roughly the same popularity as the average Democrat―twice.