I think EAs who consume economics research are accustomed to the challenges of interpreting applied microeconomic research: causal inference problems and the like. But I don’t think they are accustomed to interpreting structural models critically, which is going to become more of a problem as structural models of AI and economic growth become more common. The most common failure mode in interpreting structural research is failing to recognize model-concept mismatch. It looks something like this:
1. Write a paper about <concept> that requires a structural model, e.g. thinking about how <concept> affects welfare in equilibrium.
2. Write a model in which <concept> is mathematized in a way that represents only <narrow interpretation>.
3. Derive conclusions that only follow from <narrow interpretation>, and then conclude that they apply to <concept>.
…which is exactly the pattern you’ve identified in Acemoglu’s paper.
Model-concept mismatch is endemic to both good and bad structural research. Models require specificity, but concepts are general, so a concept has to be shrunk in particular ways to be represented in a model, and some of those ways of representing it will be mutually exclusive and lead to different conclusions. The upshot is that whenever you read an abstract saying “we propose a general equilibrium model of <complicated concept>”, you should never take it at face value. You will almost always find that its interpretation of <complicated concept> is extremely narrow.
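To make the narrowing concrete, here is a stylized sketch (my own notation, not any particular paper’s setup) of two mutually exclusive ways “AI” might be mathematized in a growth model; Acemoglu-style task automation is one of them:

```latex
% Narrowing 1: "AI" as task automation (roughly in the spirit of Acemoglu-Restrepo).
% Tasks i in [0,1]; tasks with i <= alpha are performed by capital, the rest by labor.
\[
Y = \left( \int_0^{\alpha} y_K(i)^{\frac{\sigma-1}{\sigma}} \, di
         + \int_{\alpha}^{1} y_L(i)^{\frac{\sigma-1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma-1}},
\qquad \text{``AI progress''} \equiv \text{an increase in } \alpha .
\]

% Narrowing 2: "AI" as labor-augmenting productivity in an aggregate production function.
\[
Y = F(K,\, A_L L),
\qquad \text{``AI progress''} \equiv \text{an increase in } A_L .
\]
```

Under the first narrowing, progress means capital displacing labor from tasks, which can lower the labor share (and even wages, if the accompanying productivity gains are modest); under the second, progress tends to raise labor’s marginal product and wages. Same headline concept, two different formalizations, two different welfare conclusions.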
Good research a) picks reasonable ways to do that narrowing, and b) owns what its model represents and what it does not. I think Acemoglu’s focus on automation is reasonable, because Acemoglu lives, breathes and dreams automation. It is his research agenda. It’s important. But he does not own what his model represents and what it does not represent, and that’s bad.