One intuitive argument for why capitalism should be expected to advance AI faster than competing economic systems is that capitalist institutions incentivize capital accumulation, and AI progress is mainly driven by the accumulation of computer capital.
This is a straightforward argument: a core element of capitalist institutions is traditionally considered to be the ability to own physical capital and receive income from that ownership. AI progress and AI-driven growth require physical computer capital, both for training and for inference. Right now, all the major tech companies, including Microsoft, Meta, and Google, are spending large sums to amass stockpiles of compute to train larger, more capable models and to serve AI services to customers via cloud APIs. The obvious reason these companies are taking these actions is that they expect to profit from their ownership of AI capital.
While it’s true that competing economic systems also have mechanisms for accumulating capital, the capitalist system is practically synonymous with this motive. For example, while a centrally planned government could theoretically decide to spend 20% of GDP on computer capital, the politicians and bureaucrats within such a system might have only weak incentives to pursue such a strategy, since they may not directly profit from the decision over and above the gains received by the general population. By contrast, a decentralized property and price system makes such a decision extremely natural if one expects huge returns from investments in physical capital.
One can interpret this either as a point in favor of capitalist institutions (as I mostly do), or as an argument for reining in those institutions if you think rapid AI progress is bad.
That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don’t think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.