Thanks for the post. I generally agree with your arguments but thought I should respond as someone currently doing research on a non-alignment problem. While I want a global pause, I have no idea what I personally can do to help achieve that. Whereas I at least have some idea of actions I can take that might help reduce the “massive increase in inequality/power concentration” problem.
“Solve philosophy” is not the same thing as “implement the correct philosophy”, and we need the AI to bridge that gap. There is a near-consensus among moral philosophers that factory farming is wrong, yet it persists.
This is a great point and I just wanted to call it out. I do think research is most likely to make a difference when it is produced with some thought about implementation—i.e. who the relevant audience is, how to get it to them, whether the actions you are recommending they take are actually within their power, etc.
Good interview. On why there is such vast disagreement about AI’s potential economic impacts—I do think Ajeya’s hypothesis about different base rates has merit, and I also agree that many economists are simply not entertaining the premise where AI actually can beat the top humans at all cognitive tasks.
But I think one grossly underrated reason why economists and AI futurists talk past each other on this point is that most futurists don’t understand what “economic growth” and “GDP” actually measure. Presumably, when futurists talk about “economic growth”, they use it as a shorthand to refer to real output rather than nominal output, since the latter can be increased by simply printing money. However, we don’t have ways to directly measure real economic growth — rather, all our measures are based on market transactions at nominal prices, and then we apply some “deflator” to account for inflation. Some adjustments are also attempted to account for changes in quality and consumer tastes (i.e. the consumption basket today is vastly different from the consumption basket 100 years ago), but this is a very messy and imperfect process.
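To make the nominal-vs-real distinction concrete, here is a toy sketch (all numbers invented for illustration) of how statisticians back out "real" growth: they observe nominal transactions, estimate a price deflator, and divide — real growth is inferred, never measured directly.

```python
# Toy example with invented numbers: real GDP growth is not observed
# directly; it is nominal growth adjusted by an estimated deflator.

nominal_gdp_year1 = 100.0   # arbitrary units
nominal_gdp_year2 = 110.0   # nominal output grew 10%

deflator = 1.04             # statisticians estimate prices rose 4%

# Deflate year-2 nominal GDP into year-1 prices, then compute growth.
real_gdp_year2 = nominal_gdp_year2 / deflator
real_growth = real_gdp_year2 / nominal_gdp_year1 - 1

print(f"Real growth: {real_growth:.1%}")  # ~5.8%
```

The entire result hinges on the deflator, which is exactly where the quality-adjustment and changing-consumption-basket problems enter.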
One of the reasons GDP is so bad at capturing technological change is that productivity improvements are, all else equal, deflationary. Consider AI-generated video and art: there is ~infinitely more of it today than there was 10 years ago, but because most of it is generated at such low cost, its contribution to GDP is trivial.
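A toy calculation (numbers invented) shows why this happens: if output rises 1000x while the unit price falls 1000x, the measured GDP contribution does not move at all.

```python
# Toy example with invented numbers: productivity gains are deflationary,
# so huge increases in physical output can leave measured GDP flat.

units_before, price_before = 1_000, 100.0      # e.g. videos produced per year
units_after,  price_after  = 1_000_000, 0.10   # 1000x output, 1/1000th price

gdp_contribution_before = units_before * price_before   # 100,000
gdp_contribution_after  = units_after * price_after     # 100,000

# Physical output rose 1000x; the GDP contribution is unchanged.
print(gdp_contribution_before, gdp_contribution_after)
```

Quality adjustments in the deflator are supposed to capture some of this, but as noted above that process is messy and imperfect.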
At the same time, AI will not eliminate positional and status goods. So even if AI eliminates all physical scarcity, positional goods are likely to then take up more and more of the overall “economy”. This suggests that the correlation between GDP and the well-being metrics that people actually care about will weaken, if not decouple entirely.
I suspect much of the gap between mainstream economists and AI futurists would shrink (though not entirely vanish) if futurists just stopped referring to “economic growth” and instead referred to how they expect physical output or energy usage to change over time. Both the Agricultural and Industrial Revolutions can be explained just as well, if not better, with energy usage as with GDP. I think the same will be true of TAI. Most economists would readily concede that AI could increase energy usage by 1000x or more if AI managed to solve nuclear fusion, for example; they’d just be (rightly) sceptical of any suggestion that that translates to a 1000x increase in measured GDP.