A while back John Wentworth wrote the related essay What Do GDP Growth Curves Really Mean?, where he pointed out that, simply because of the way GDP is calculated, you wouldn’t be able to tell from GDP growth data alone that AI takeoff was boosting the economy (emphasis mine):
I sometimes hear arguments invoke the “god of straight lines”: historical real GDP growth has been incredibly smooth, for a long time, despite multiple huge shifts in technology and society. That’s pretty strong evidence that something is making that line very straight, and we should expect it to continue. In particular, I hear this given as an argument around AI takeoff—i.e. we should expect smooth/continuous progress rather than a sudden jump.
Personally, my inside view says a relatively sudden jump is much more likely, but I did consider this sort of outside-view argument to be a pretty strong piece of evidence in the other direction. Now, I think the smoothness of real GDP growth tells us basically-nothing about the smoothness of AI takeoff. Even after a hypothetical massive jump in AI, real GDP would still look smooth, because it would be calculated based on post-jump prices, and it seems pretty likely that there will be something which isn’t revolutionized by AI. At the very least, paintings by the old masters won’t be produced any more easily (though admittedly their prices could still drop pretty hard if there’s no humans around who want them any more). Whatever things don’t get much cheaper are the things which would dominate real GDP curves after a big AI jump.
More generally, the smoothness of real GDP curves does not actually mean that technology progresses smoothly. It just means that we’re constantly updating the calculations, in hindsight, to focus on whatever goods were not revolutionized. On the other hand, smooth real GDP curves do tell us something interesting: even after correcting for population growth, there’s been slow-but-steady growth in production of the goods which haven’t been revolutionized.
I do agree with your remark that
well-chosen economic indices might track “AI capabilities” in a sense more directly tied to the social and geopolitical implications of AI we actually care about for some purposes.[4] Badly chosen economic indices might not.
but for the GDP case I don’t actually have any good alternative suggestions, and am curious if others do.
Thanks for pointing me to that post! It’s getting at something very similar.
I should look through the comments there, but briefly, I don’t agree with his idea that
GDP at 1960 prices is basically the right GDP-esque metric to look at to get an idea of “how crazy we should expect the future to look”, from the perspective of someone today. After all, GDP at 1960 prices tells us how crazy today looks from the perspective of someone in the 1960′s.
If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon in this-year prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative suggestion, but I’m still mostly at the stage of reading and thinking (and asking o1).
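The base-year-price effect behind the caviar example can be sketched with a toy calculation. All numbers here are hypothetical, chosen only to make the mechanism visible: one good is "revolutionized" (its price collapses while output explodes) and one is not, and the measured growth factor depends heavily on which year's prices we value the bundles at.

```python
# Toy sketch (hypothetical numbers): measured real GDP growth after a big
# productivity jump depends on which year's prices are used as the base.
# Two goods: "caviar" (revolutionized) and "haircuts" (unchanged).

def real_gdp(quantities, prices):
    """Value a bundle of quantities at a fixed set of prices."""
    return sum(q * p for q, p in zip(quantities, prices))

# Bundles as (caviar, haircuts)
pre_q, pre_p = (1, 10), (100, 10)    # before the jump
post_q, post_p = (50, 10), (1, 10)   # after: caviar 100x cheaper, 50x more of it

# Valued at pre-jump prices, the revolutionized good dominates the index:
growth_pre_prices = real_gdp(post_q, pre_p) / real_gdp(pre_q, pre_p)

# Valued at post-jump prices, the unrevolutionized good dominates:
growth_post_prices = real_gdp(post_q, post_p) / real_gdp(pre_q, post_p)

print(f"growth at pre-jump prices:  {growth_pre_prices:.1f}x")   # 25.5x
print(f"growth at post-jump prices: {growth_post_prices:.2f}x")  # 1.49x
```

At pre-jump prices the jump looks like a 25x explosion; at post-jump prices it looks like modest ~1.5x growth, dominated by haircuts. This is the sense in which hindsight-repriced real GDP can stay smooth through a jump, while "GDP at last year's prices" balloons.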