Chip Production Policy Won’t Matter as Much as You’d Think

tl;dr—If timelines are short, it’s too late, and if they are long (and if we don’t all die), the way to win the “AI race” is to generate more benefit from AI, not to control chip production.

Addendum: In the discussion in the comments, Peter makes good points, but I conclude: “this is very much unclear, and I’d love to see a lot more explicit reasoning about the models for impact, and how the policy angles relate to the timelines and the underlying risks.”

Addendum 2.a: See conversation with @Erich_Grunewald 🔸 in the comments, where he made several important points that I think should materially change the conclusions—not enough to say that chip production policy will matter, but likely that chip export controls would.
Addendum 2.b: @Steven Byrnes has pointed out that the pressure differential from a Brita doesn’t work the way I understood. I have not figured out what is happening, and am far more comfortable with the economics than with the physics—so I think the analogy about counter-pressure works regardless of the dynamics for Brita filters.

In AI policy, there’s a lot of focus on the speed at which frontier AI develops, becomes increasingly important for the economy, and creates substantial new risks of loss of control. There is also a lot of focus on the chips needed for training and running the frontier models, which involves industrial policy around who has the chips and who can make them. This leads to a questionable narrative around the race for AGI, but even before we get to that question, there’s a simpler question about how these two dimensions interact.

If AI takeoff is fast, the question of where the chips will be located is already determined—policies for building fabs and energy production matter over the next decade, not before 2028. So if AI takeoff happens soon, and (a neglected third dimension) if control of the chips actually matters because the takeoff doesn’t kill us all, then running the race and prioritizing industrial policy over free trade doesn’t make sense; it’s too late to matter.

We’re living in a world where AI is going to have severe economic impacts, even if it doesn’t take off. And so for the rest of this discussion, let’s assume we’re in the lower half of the diagram.

And if AI development is gradual—and by gradual, I mean the bearish predictions of an extra 1-5% annual GDP growth from AI by 2030, which could produce a durable economic advantage for the West over China, if it’s somehow kept here—then who makes the chips matters very little. We’ll see diffusion of the tech and deployment of the models everywhere long before differential GDP growth creates any kind of decisive advantage. In that world, we care a lot more about differential adoption of AI than about who owns the chips.

The stuff that creates growth, i.e. AI chips, which will increasingly be identical to what economic theory calls capital, will try hard to migrate to wherever it can do the most economic good. This is the free-market equivalent of gravitational attraction: a fundamental force driven by human self-interest. Keeping it from happening might work in the short term, but in an open system it requires ever-increasing, and eventually indefinite, effort to maintain.

And this is a fundamental issue with trying to break markets. Policies can create incentives that change relative prices, but unless you stop all trade, markets will find increasingly desperate ways to equalize pressure. Tariffs can raise relative prices, but they can’t stop the movement of goods—unless they make it uneconomical to import the item at all.

Western countries could make all the chips, whether in Taiwan or in the US, but couldn’t fully keep chips from moving to where they are wanted, any more than the US can keep everyone from buying Russian oil—they can only marginally change relative prices with increasingly severe sanctions and pressure. As economists often say, banning things is (usually) just imposing a large tax. And that means we can have only a limited impact on where chips go. (Even location verification is inevitably going to be only partially effective, albeit effective at further marginally raising the costs.) So this prediction is just making the obvious point that we’ll see smuggling as long as it’s insanely profitable.
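To make the “banning is just a large tax” point concrete, here’s a toy model with purely illustrative numbers (the prices and smuggling costs are my assumptions, not data from the post): a ban only stops flow once the effective “tax” it imposes exceeds the entire price premium the restricted market will pay.

```python
# Toy model: an export ban acts like a tax on moving chips into a
# restricted market. Smuggling persists while the price gap exceeds
# the smuggler's total cost (logistics + risk premium).

def smuggling_is_profitable(price_inside: float,
                            price_outside: float,
                            smuggling_cost: float) -> bool:
    """True if moving one chip into the restricted market pays off."""
    return (price_inside - price_outside) > smuggling_cost

# Illustrative numbers (assumptions for the sketch):
open_market = 30_000       # $ per accelerator on the open market
restricted = 80_000        # $ the restricted market is willing to pay
cost_of_smuggling = 20_000 # per-chip logistics + risk premium

# A $50k premium against a $20k smuggling cost: the flow continues.
print(smuggling_is_profitable(restricted, open_market, cost_of_smuggling))

# Tighter enforcement raises the effective tax, but only stops the flow
# once it swallows the whole premium.
print(smuggling_is_profitable(restricted, open_market, 60_000))
```

The point of the sketch is that enforcement shifts the threshold, not the logic: any premium above smugglers’ all-in costs keeps the market moving.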

If you’ve ever overfilled a Brita pitcher, you’ll recognize the equivalent scenario—the filter creates pressure, but unless you can stop the flow completely, you’re just talking about slight changes to the relative levels in the top and bottom sections. Removing the filter would equalize the pressure, but unless you can stop all flow between the sections, you can’t keep all the water in the top one.

Aside: Gary Marcus continues to be partially right; LLMs can’t figure out how to generate this analogy without hints, and they can’t figure out that the filter pressure keeps the heights unequal.

So where will the chips go? Wherever it’s most lucrative to put them, modulo costs.

Of course, increasing clean power production, baseload and storage, and getting prices much lower, is stupidly good policy regardless of AI. And for determining how lucrative data centers are, it is going to matter at least as much as where chips are made and moved to. So even if we somehow cut off the best chips completely, and China implausibly stays a generation behind the West in chip production in the intermediate term, chips that are 25% as efficient are still better, at least for inference, as long as energy prices are less than 25% of what they are in the West. So in equilibrium, China ends up with more of the data centers unless the West closes the power-price gap, or the price differences created by export controls outweigh it.
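The 25%-efficiency break-even above is just arithmetic: energy cost per unit of inference is power price divided by efficiency. A minimal sketch, with illustrative numbers of my own choosing (not from the post), makes the threshold explicit:

```python
# Back-of-the-envelope: energy cost per unit of inference work.
# A chip that is 25% as efficient burns 4x the energy per unit of work,
# so it breaks even on energy cost exactly when power costs 25% as much.

def energy_cost_per_unit(power_price: float, relative_efficiency: float) -> float:
    """Energy cost to do one unit of inference work.

    power_price: price per kWh, normalized so the Western price is 1.0
    relative_efficiency: work per kWh, relative to the frontier chip (1.0)
    """
    return power_price / relative_efficiency

western = energy_cost_per_unit(power_price=1.0, relative_efficiency=1.0)
lagging = energy_cost_per_unit(power_price=0.25, relative_efficiency=0.25)
print(western == lagging)  # exact break-even at the 25% power price

# Below the threshold, the less efficient chip wins on energy cost:
cheaper_power = energy_cost_per_unit(power_price=0.20, relative_efficiency=0.25)
print(cheaper_power < western)
```

This only compares energy costs; chip capex, cooling, and utilization would shift the threshold, but the direction of the argument is the same.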

So yes, data centers will migrate to where the overall cost of chips + power is cheap—but this isn’t the end of the story, because the location of a data center really only matters for HFT. And in worlds where China has 100% of chip production and power that’s half as expensive, if OpenAI is willing to pay twice what Chinese companies will for the resulting compute, there will be lots of pressure to locate the data centers somewhere OpenAI is willing to pay for them. (Which probably isn’t inside China, given the infosec risks.)

What matters in a (relatively near-term) equilibrium is who can get the most use out of the models, and I think policy discussions are mostly ignoring this point. If total compute remains limited, the highest-value applications will pay enough to saturate their needs, just like OpenAI and other Western firms are saturating NVIDIA’s production capacity today. If Chinese companies are allowed to automate their factories with GPT-6 agents and US companies cannot, they’ll pay to use the models and reap the benefits of economic growth. And if Americans can’t pay for GPT-6 psychologists, they won’t use the models, and won’t get the benefits of reduced mental healthcare costs—which seem kind of important as the world accelerates and joblessness among young males increases (with all the likely tragic implications).

So for all the handwringing, in worlds where economic power drives differential advantage, the chip production location has limited impact—not none, but nothing like the decisive advantage that seems to be implied. Power production has additional but still limited impact. The end beneficiaries of the growth will be whoever uses the AI, albeit as slightly modified by the price differentials created by these policies. And that means the real race is about deregulation and economic growth uber alles—which the AI industry in the US has recently realized.

So in summary, the idea that the US will create a sustainable advantage by magically moving industrial production to the US (which itself is… hard) and then actually permitting power (which we seem not to be doing) is mostly a fantasy. But this runs both ways: if the chips help US companies more than they help firms elsewhere, the growth created by the chips will inevitably accrue here, and vice-versa. (And cutting off trade, much less internet access, to keep the chips from being used for growth elsewhere defeats the point—you’re destroying the thing needed for growth.)

Of course, this leads to increasingly dangerous attempts to race towards unrestricted capabilities, imposing uninternalized externalities on the public—there needs to be some regulation. But it seems to me that chip restrictions, and all the attendant industrial policy promoting race dynamics, are not the way.

Thanks to Sean O hEigeartaigh and Mathieu Putz for comments on an earlier version of this post.