Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:
[I]n one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it’s capitalism versus humanity.
I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don’t think this is unique to capitalism, or that things would be much better under some other economic system.
The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development: space technology, missile systems, military aircraft, and so on. I couldn’t find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (as well as Japan and Germany) on R&D as a percentage of GDP. And it did not seem to develop these technologies any more safely than capitalist countries did (cf. the Sverdlovsk anthrax leak, Chernobyl).
If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, with investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from, e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (security, control, strategic competition, and economic growth) than with free-market competition.
One intuitive argument for why capitalism should be expected to advance AI faster than competing economic systems is that capitalist institutions incentivize capital accumulation, and AI progress is mainly driven by the accumulation of computer capital.
The argument is straightforward: a core element of capitalist institutions, as traditionally understood, is the ability to own physical capital and receive income from that ownership. AI progress and AI-driven growth require physical computer capital, both for training and for inference. Right now, all the major tech companies, including Microsoft, Meta, and Google, are spending large sums to amass stockpiles of compute to train larger, more capable models and to serve customers AI services via cloud APIs. The obvious reason these companies are doing this is that they expect to profit from their ownership of AI capital.
While it’s true that competing economic systems also have mechanisms for accumulating capital, the capitalist system is practically synonymous with this motive. For example, while a centrally planned government could theoretically decide to spend 20% of GDP on computer capital, the politicians and bureaucrats within such a system might have only weak incentives to pursue such a strategy, since they would not directly profit from the decision over and above the gains received by the general population. By contrast, a decentralized property and price system makes such a decision extremely natural if one expects huge returns from investments in physical capital.
One can interpret this argument as a positive argument in favor of capitalist institutions (as I mostly do), or as an argument for reining in these institutions if you think that rapid AI progress is bad.
That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don’t think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.
Thanks for your thoughtful engagement! Chalmers made a similar point during our interview (that socialist societies would also experience strong pressures to build AGI).
I tried to describe the landscape as it exists right now, without making many claims about what would likely be true under a totally different economic/political system. That being said, I do think it’s interesting that the leading labs are all corporations.
If you look at firms in a market economy as profit-maximizing agents, and at governments as agents trying to balance many interests (stability, economic growth, geopolitical/military advantage, popular support, international respect, etc.), then I think it’s easier to see why firms are pursuing AGI far more aggressively: by decreasing the cost of labor via automation, you can dramatically increase your profitability. For a government, AGI may boost economic growth and geopolitical/military advantage at the expense of stability and popular support.
And if you look at existential risk from AI as an externality, governments are more likely to take on the costs of mitigating that kind of risk, whereas firms are more likely to pass them on to the broader society.
I’ve seen some claims that the CCP is less interested in AGI and more interested in narrow applications, like machine vision, facial recognition, and natural language processing, all of which can help shore up its power long term. I haven’t gone deep into this yet. I’ll dig into the China links you sent later.