The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: about a 0.06% increase in TFP growth annually. This stands in contrast to many forecasts of immense growth impacts from AI, including many from other academic economists. Why does Acemoglu come to such a different conclusion than his colleagues, and who is right?
First, Acemoglu divides up the ways AI could affect productivity into four channels:
1. AI enables further (extensive-margin) automation. Obvious examples of this type of automation include generative AI tools such as large language models taking over simple writing, translation and classification.
2. AI can generate new task complementarities, raising the productivity of labor in tasks it is performing. For example, AI could provide better information to workers, directly increasing their productivity. Alternatively, AI could automate some subtasks (such as providing readymade subroutines to computer programmers) and simultaneously enable humans to specialize in other subtasks, where their performance improves.
3. AI could induce deepening of automation—meaning improving performance, or reducing costs, in some previously capital-intensive tasks. Examples include IT security, automated control of inventories, and better automated quality control.
4. AI can generate new labor-intensive products or tasks.
Each of these four channels refers to a specific mechanism in his task-based model of production:
- Automation raises the threshold of tasks performed by capital instead of labor.
- Complementarities raise labor productivity in non-automated tasks.
- Deepening of automation raises capital productivity in already-automated tasks.
- New tasks are extra production steps that only labor can perform; for example, the automation enabled by computers created programming as a new task. (A stylized version of this model is sketched below.)
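To make the mapping concrete, here is a stylized version of the task-based setup (my sketch of the standard Acemoglu–Restrepo framework; the notation is illustrative, not lifted from the paper). Output aggregates a continuum of tasks $i \in [N-1, N]$, each produced by capital or labor depending on whether it falls below the automation threshold $I$:

$$
Y = \left( \int_{N-1}^{N} y(i)^{\frac{\sigma-1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma-1}},
\qquad
y(i) =
\begin{cases}
A_K \, \gamma_K(i) \, k(i) & i \le I \quad \text{(automated)} \\
A_L \, \gamma_L(i) \, \ell(i) & i > I \quad \text{(labor)}
\end{cases}
$$

In this notation, extensive-margin automation raises the threshold $I$; complementarities raise labor productivity $\gamma_L(i)$ on tasks above $I$; deepening of automation raises capital productivity $A_K$ (or $\gamma_K(i)$) on tasks already below $I$; and new tasks raise $N$.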
The chief sin of this paper is dismissing the latter half of these mechanisms without good arguments or evidence.
“Deepening automation” in Acemoglu’s model means increasing the efficiency of tasks already performed by machines. This raises output but doesn’t change the distribution of tasks assigned to humans vs machines. AI might deepen automation by creating new algorithms that improve Google’s search results on a fixed compute budget or replacing expensive quality control machinery with vision-based machine learning, for example.
This kind of productivity improvement can have huge growth effects. The second industrial revolution was mostly “deepening automation” growth: electricity, machine tools, and Bessemer steel improved already-automated processes, leading to the fastest rate of economic growth the US has ever seen. In addition, deepening automation always increases wages in Acemoglu’s model, in contrast to the possible negative wage effects from the extensive-margin automation that he focuses on.
So why does Acemoglu ignore this channel?
> I do not dwell on deepening of automation because the tasks impacted by (generative) AI are quite different than those automated by the previous wave of digital technologies, such as robotics, advanced manufacturing equipment and software systems.
This single sentence is the only justification he gives for omitting capital productivity improvements from his analysis. A charitable interpretation notes that he is referring only to “(generative) AI” like ChatGPT and Midjourney, tools which do seem more focused on augmenting human labor than on doing what software already does, but more efficiently. Yet Acemoglu is happy to drop the “generative” qualifier everywhere else in his paper.
The more important point is that the consumer-facing “generative AI” products Acemoglu refers to are just wrappers around the more general technology of transformers. Transformers are already being used to train robots, operate self-driving cars, and improve credit card fraud detection: all examples of increasing the productivity of tasks already performed by machines.
It is easier to get small productivity effects and ambiguous wage effects from AI if you assume that it will have zero impact on capital productivity.
Potential economic gains from new tasks aren’t included in Acemoglu’s headline estimate of AI’s productivity impact either. This is strange, since he has written a previous paper studying the creation of new tasks and their growth implications in the exact same model. In the current paper he acknowledges that:
> The wage and productivity impact of new tasks can be potentially larger than cost savings in existing tasks, and this is particularly likely to be the case when new tasks improve the entire production process, or when they add new sources of cost improvements or complementary functions. Despite new tasks’ central role in wage and productivity growth and in reducing labor income inequality I will not focus on new good tasks generated by AI for the reasons discussed in detail in the Conclusion.
The justification he gives for ignoring this channel is weak.
> If AI is used to create new tasks and products, these will also add to GDP and can boost productivity growth. Nevertheless, when we incorporate the possibility that new tasks generated by AI may be manipulative, the impact on welfare can be even smaller.
Instead of incorporating possible gains from new tasks, he focuses only on the “new bad tasks” that AI might create, e.g. producing misinformation and targeted ads. Based on some studies about harms from social media, he concludes that while revenue from these “manipulative” tasks might raise GDP by 2%, they would actually lower welfare by 0.72%.
There is zero argument or evidence given for why we should expect the harms from AI to be similar to those from social media, or why new bad tasks should outnumber and outweigh new good ones. He doesn’t end up including gains or losses from new tasks in his final count of productivity effects, but ignoring possible gains from new good tasks while making large empirical assumptions to get a negative effect from new bad tasks exemplifies a pattern of motivated reasoning that repeats throughout the paper.
He also mentions a fifth way AI could affect productivity: changing the process of science itself. Acemoglu does not even include this in his list of possible channels since “large-scale advances of this sort do not seem likely within the 10-year time frame.” This channel probably has the largest potential effect, since the feedback loop between research inputs and outputs can in principle lead to singularities. Even milder changes, such as raising the capital intensity of R&D as more fields make advances using compute-intensive deep learning, could double productivity growth rates. Dismissing all of this with a single sentence is not enough for a paper that claims insight over AI’s economic impact as a whole.
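To see why this channel could dominate the others, consider a textbook idea-production function (a sketch from the growth literature, not from Acemoglu’s paper):

$$
\dot{A} = \theta A^{\phi} R
$$

where $A$ is productivity and $R$ is research input. With $R$ growing at a fixed exogenous rate and $\phi < 1$, growth settles down to a constant rate. But if AI lets research input scale with output itself, so that $R \propto A$, the effective exponent becomes $\phi + 1$, and for any $\phi > 0$ the equation explodes in finite time: the singularity scenario discussed by Aghion, Jones and Jones (2019).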
The biggest problem with this paper is claiming a wide scope over all of AI’s macroeconomic impacts and then immediately cutting out analysis of the most important channels of that impact. The analysis within the channels he does consider is also somewhat suspect.
Acemoglu’s estimate of the productivity effects from the “automation” channel is derived from a complicated task-based production model, but it reduces to a super simple equation: the change in TFP equals the share of GDP from tasks affected by AI multiplied by the average cost savings in those tasks. The GDP share combines Eloundou et al. (2023), which estimates that ~20% of tasks are “exposed” to AI, with Svanberg et al. (2024), which estimates that 23% of those exposed tasks can be profitably automated, so 4.6% of GDP is exposed. He then combines results from three papers that experimentally roll out AI to workers to get cost savings: these average out to around 30% productivity gains for labor, which is about 50% of costs, so 15% total. Multiplying these together gets his overall estimate that the “total factor productivity (TFP) effects within the next 10 years should be no more than 0.66% in total—or approximately a 0.064% increase in TFP growth annually.”
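The headline number really is just this arithmetic. Here is a quick back-of-the-envelope reproduction (a sketch using the rounded figures above, which lands near, though not exactly on, the paper’s 0.66%):

```python
# Back-of-the-envelope reproduction of Acemoglu's headline TFP estimate.
# Inputs are the rounded figures cited above, so the result is approximate.

exposed_share = 0.20      # share of tasks "exposed" to AI (Eloundou et al. 2023)
automatable_share = 0.23  # share of exposed tasks profitably automatable (Svanberg et al. 2024)
gdp_share = exposed_share * automatable_share  # ~4.6% of GDP affected

labor_productivity_gain = 0.30  # average gain in the experimental studies
labor_cost_share = 0.50         # labor's share of costs in affected tasks
cost_savings = labor_productivity_gain * labor_cost_share  # ~15%

tfp_total = gdp_share * cost_savings           # ~0.69% over 10 years (paper: 0.66%)
tfp_annual = (1 + tfp_total) ** (1 / 10) - 1   # ~0.07% per year (paper: 0.064%)

print(f"GDP share affected:     {gdp_share:.1%}")
print(f"10-year TFP effect:     {tfp_total:.2%}")
print(f"Annualized growth bump: {tfp_annual:.3%}")
```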
Multiplying these numbers out is a good starting point, and certainly better than the vibes-based guesses that many base their AI predictions on. Still, it seems pretty strange to take exposure estimates and experimental productivity results based on GPT-3.5 and GPT-4 and assume they will hold for the next 10 years. How many “AI exposure” estimates from 2018 would have included graphic design and creative writing? A few years later, those industries are at the top of the list. The deluge of AI investment will push us further up the scaling-laws graph, so we should expect similar changes over the next several years.
If Acemoglu’s paper were titled “Cost Savings From Extensive-Margin AI Automation” it would be pretty good: it would take the most recent empirical estimates of AI’s impact, work them into a rich task-based production model, and arrive at an interesting result, namely that the effects through this channel are smaller than you might expect. The paper is titled “The Simple Macroeconomics of AI,” though, and it claims to answer far more questions than it actually does. I am confident that the predictions of small economic impact that Acemoglu makes in this paper will not hold up over the next 10 years.
I think EAs who consume economics research are accustomed to the challenges of interpreting applied microeconomic research: causal inference challenges and the like. But I don’t think they are accustomed to interpreting structural models critically, which will become more of a problem as structural models of AI and economic growth become more common. The most common failure mode in interpreting structural research is failing to recognize model-concept mismatch. It looks something like this:
1. Write a paper about <concept> that requires a structural model, e.g. thinking about how <concept> affects welfare in equilibrium.
2. Write a model in which <concept> is mathematized in a way that represents only <narrow interpretation>.
3. Derive conclusions that only follow from <narrow interpretation>, and then conclude that they apply to <concept>.
This is exactly what you’ve identified in Acemoglu’s paper.
Model-concept mismatch is endemic to both good and bad structural research. Models require specificity, but concepts are general, so concepts have to be shrunk in particular ways to fit a model, and some of those ways of representing them are mutually exclusive and lead to different conclusions. This means that whenever you read an abstract saying “we propose a general equilibrium model of <complicated concept>”, never take it at face value: you will almost always find that its interpretation of <complicated concept> is extremely narrow.
Good research a) picks reasonable ways to do that narrowing, and b) owns what it represents and what it does not represent. I think Acemoglu’s focus on automation is reasonable, because Acemoglu lives, breathes and dreams automation. It is his research agenda, and it’s important. But his paper does not own what its model represents and what it leaves out, and that’s bad.
Executive summary: Acemoglu’s 2024 paper predicting small economic growth effects from AI is flawed because it ignores or dismisses important channels through which AI could significantly boost productivity, and relies on assumptions that may not hold as AI capabilities advance.
Key points:
- Acemoglu’s paper considers four channels for AI to affect productivity, but focuses primarily on extensive-margin automation while dismissing or ignoring the potential impacts of AI deepening automation, creating new tasks, and accelerating scientific progress.
- The paper’s justifications for ignoring these other channels are weak or absent, suggesting motivated reasoning to downplay AI’s potential economic impact.
- Acemoglu’s estimate of productivity gains from extensive-margin automation relies on exposure and cost-saving estimates based on current AI capabilities, which are likely to significantly increase over the paper’s 10-year time horizon.
- The paper’s narrow focus and assumptions lead to an underestimation of AI’s potential to drive economic growth through multiple channels beyond just automating existing tasks.