> The Baumol point is that among a set of already existing goods which we don't see as very substitutable, GDP growth can be pulled down arbitrarily by the slow-growing goods. … The point I'm making here is that even if we fully automate production, and even if the quantity of every good existing today then grows arbitrarily quickly, we might create new goods as well. Once we do so, if the production of old goods grows quickly while our production of the new goods doesn't, GDP growth may be slow.
Not sure this is disagreement per se, but I think the surprising behavior of GDP in your model is almost entirely due to the shape of the utility function and doesn’t have much to do with either (1) the distinction between existing vs new products or (2) automation. In other words, I think this is still basically Baumol, although I admit to a large extent I’m just arguing here about preferred conceptual framing rather than facts.
Consider modifying your model as follows (which I presume makes it more like a traditional Baumol model):
- There is only one time period.
- Good 2 is always available.
- Productivity of Good 1 grows at rate $r_1$ and productivity of Good 2 grows at the much slower rate $r_2 \ll r_1$. Specifically, the production possibility frontier at time $t$ is $f(x_1, x_2) = 1$, with $x_1, x_2 \ge 0$, where $f(x_1, x_2) := x_1 e^{-r_1 t} + x_2 e^{-r_2 t}$.
Using your same utility function $u(x_1, x_2) = \log(x_1) + x_2$, production is fully devoted to Good 1 for all times $t < 0$, and during that time GDP grows at rate $r_1$. Then at time $t = 0$, it becomes worthwhile to start producing Good 2. For $t > 0$, the productivity growth rate of Good 1 remains much higher than that of Good 2 ($r_1 \gg r_2$), and indeed the number of units of Good 1 produced grows exponentially faster than that of Good 2:

$$x_1(t) = e^{(r_1 - r_2)t}, \qquad x_2(t) = e^{r_2 t} - 1.$$

Nonetheless, the marginal value of Good 1 plummets due to the log in the utility function. Specifically, the relative price of Good 1 falls exponentially in time, $p_1(t)/p_2(t) = e^{-(r_1 - r_2)t}$, where $p_i(t) := \partial f/\partial x_i$, as does Good 1's price-weighted fraction of production:
$$\frac{p_1(t)\, x_1(t)}{p_1(t)\, x_1(t) + p_2(t)\, x_2(t)} = e^{-r_2 t}.$$
GDP growth falls from $r_1$ and exponentially asymptotes to $r_2$ for large $t$ (the instantaneous rate works out to $r_2 + (r_1 - r_2) e^{-r_2 t}$).
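If it helps to see the mechanics, here is a minimal numeric sketch (my addition, not part of the original comment; the rates $r_1 = 0.3$, $r_2 = 0.02$ are illustrative) that recovers the closed forms above by brute-force search over the PPF and evaluates the implied GDP growth rate:

```python
# Two-good Baumol model from the comment above: maximize u = log(x1) + x2 on the
# PPF x1*exp(-r1*t) + x2*exp(-r2*t) = 1, and compare a grid search against the
# closed forms x1(t) = e^{(r1-r2)t}, x2(t) = e^{r2 t} - 1.
import math

r1, r2 = 0.30, 0.02  # fast vs. slow productivity growth (assumed values)

def closed_form(t):
    return math.exp((r1 - r2) * t), math.exp(r2 * t) - 1

def numeric_optimum(t, grid=100_000):
    # Parametrize the PPF by x1; then x2 = (1 - x1*e^{-r1 t}) * e^{r2 t}.
    hi = math.exp(r1 * t)  # x1 if all output went to Good 1
    best_u, best = -math.inf, None
    for k in range(1, grid):
        x1 = hi * k / grid
        x2 = (1 - x1 * math.exp(-r1 * t)) * math.exp(r2 * t)
        u = math.log(x1) + x2
        if u > best_u:
            best_u, best = u, (x1, x2)
    return best

def gdp_growth(t):
    # Divisia growth rate implied by the closed forms: starts at r1, decays to r2.
    return r2 + (r1 - r2) * math.exp(-r2 * t)

for t in [1.0, 10.0, 50.0]:
    print(t, closed_form(t), numeric_optimum(t), round(gdp_growth(t), 4))
```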
Two points on the model:
1. Although production of Good 2 was 0 for $t < 0$, this isn't because it was unavailable or impossibly expensive. Indeed, Good 1 is getting easier to produce relative to Good 2 at all times, so in some sense Good 2 is (relatively) easiest to produce in the distant past. Rather, no units of Good 2 are produced for $t < 0$ simply because they aren't valued highly enough under the chosen utility function.
2. The transition from the all-Good-1 economy ($t < 0$) to the mostly-Good-2 economy ($t > 0$) is due to hitting the key point in the utility curve, not to any change in productivity growth rates (which remain constant). You can throw in automation (i.e., a productivity shock) at any point, say, increasing both $r_1$ and $r_2$ by a constant factor $\gamma$, and you'll still have a net fall in GDP growth rates so long as $\gamma r_2 < r_1$.
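A one-line numeric restatement of that last condition, with an assumed $\gamma$ (again my addition, not the commenter's):

```python
# After scaling both rates by gamma, GDP growth asymptotes to gamma*r2, which
# still sits below the pre-shock rate r1 whenever gamma*r2 < r1.
gamma, r1, r2 = 5.0, 0.30, 0.02
print(gamma * r2, "<", r1, "->", gamma * r2 < r1)  # 0.1 < 0.3 -> True
```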
Ok, so then what are the takeaways for AI? By cleanly separating the utility-function effect from shocks to productivity, I think this gives us reason to believe that the past is a reasonable guide to the future. Yes, there could be weird kinks in our utility function, but in terms of revealing kinks there's not much reason to think that AI-induced productivity gains will be importantly different from productivity gains of the past.
What quantity should we measure if not GDP?
First, we might consider this: take the yearly output of today's economy (2B tons of steel, etc.) and, for each future date, divide future GDP by the cost (at that time) of producing today's basket of goods. But this doesn't work: for large times, this growth rate will be dominated by the growth rate of the good in today's basket whose production is growing slowest in the future. (In our model, that would be Good 2, growing at rate $r_2$.) So it's still Baumol-vulnerable.
Then, we might consider: compute the value of all the goods produced at a future date using today's prices. For a single year this is just real GDP, but it isn't the same thing as real GDP over more than one year because it isn't chained. And the lack of chaining is why this quantity is bad: if you have a niche good whose production is skyrocketing, this growth rate would explode even if the good's price were falling just as fast and nothing else amazing were happening in the economy. For example, if 1 transistor cost ~$1k in 1949, and we now produce almost a sextillion ($\sim 10^{21}$) per year, that would value today's economy at about a septillion ($10^{24}$) dollars. In our model, this measure would be dominated by Good 1, growing at rate $r_1$, but this isn't what we want.
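To see both failure modes concretely, here is a sketch in the two-good model above (my addition; the rates and the reference date `T0` are assumptions for illustration):

```python
# Both proposed measures evaluated in the two-good model: deflating by the cost
# of today's basket asymptotes to the slow rate r2, while valuing future output
# at today's prices grows at r1 - r2, which is ~r1 when r2 is small.
import math

r1, r2, T0 = 0.30, 0.02, 5.0  # illustrative rates; T0 = "today" (any t > 0)

def x1(t): return math.exp((r1 - r2) * t)  # optimal quantities for t > 0
def x2(t): return math.exp(r2 * t) - 1
def p1(t): return math.exp(-r1 * t)        # prices = marginal costs
def p2(t): return math.exp(-r2 * t)

def basket_deflated(t):
    # Nominal GDP on this PPF is p1*x1 + p2*x2 = 1; deflate by the time-t cost
    # of producing today's (time-T0) basket.
    return 1.0 / (x1(T0) * p1(t) + x2(T0) * p2(t))

def todays_prices(t):
    return p1(T0) * x1(t) + p2(T0) * x2(t)

def growth(f, t, h=1e-3):  # numerical log-derivative
    return (math.log(f(t + h)) - math.log(f(t - h))) / (2 * h)

for t in [20, 50, 100]:
    print(f"t={t:3d}  basket-deflated: {growth(basket_deflated, t):.3f}  "
          f"today's-prices: {growth(todays_prices, t):.3f}")
# basket-deflated -> r2 = 0.02; today's-prices -> r1 - r2 = 0.28
```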
I think there’s just no getting around the fact that the kind of growth we care about is unavoidably wrapped up in our utility function. But as long as some fraction of people want to build Jupiter brains and explore Andromeda enough that they don’t devote ~all their efforts to goals that are intrinsically upper bounded, I expect AGI to lead to rapid real GDP growth (although it does likely eventually end with light-speed limits or whatever).
If growth were slow post-singularity, I think that would imply something pretty weird about human utility in this universe (or rather, the utility of the beings controlling the economy). There could of course still be crazy things happening like wild increases in energy usage at the same time, but this isn't too different from how wild the existence of nanometer-scale transistors is relative to pre-industrial civilization. If you care about those crazy things independent of GDP (which is a measure of how fast the world overall is getting what it wants), you should probably just measure them directly, e.g., energy usage, planets colonized, etc.
Hey Jess, thanks for the thoughtful comments.

On whether "this is still basically Baumol"

If we make that one tiny tweak and say that good 2 was out there to be made all along, just too expensive to be demanded, then yes, it's the same! That was the goal of the example: to introduce a Baumol-like effect in a way so similar to how everyone agrees Baumol effects have played out historically that it's really easy to see what's going on.
I’m happy to say it’s basically Baumol. The distinction I think is worth emphasizing here is that, when people say “Baumol effects could slow down the growth effects of AI”, they are usually—I think always, in my experience—pointing to the fact that if
1. AI dramatically speeds productivity growth on most goods but leaves some that only humans can produce (say, artisanal handicrafts), and
2. consumers see those goods as not very substitutable for the goods we're getting way more productive at,
then GDP growth won't speed up much. This then invites the response that, when we look around, there doesn't seem to be any human-only good strongly satisfying (1) and (2). My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation. It arises not because of any limit on our ability to fully automate, but because a technological advance can encompass full automation and still go beyond it: beyond yielding a world where we can produce way more of everything we would ever have produced without the advance, it can let us produce goods we otherwise wouldn't have been on track to produce at all. This has not been widely appreciated.
On whether there is any reason to expect the productivity acceleration to coincide with the “kink in the utility function”
Here I think I disagree with you more substantively, though maybe the disagreement stems from the small framing point above.
If indeed “good 2” were always out there, just waiting for its price to fall, and if a technology were coming that would just replace all our existing workers, factory parts, and innovators with versions that operate more quickly in equal proportion—so that we move along the same paths of technology and the quantity of each good produced, but more quickly—then I agree that the past would be a good guide to the future, and GDP growth would reliably rise a lot. The only way it wouldn’t would be if the goods we were just about to start producing anyway were like good 2, featuring less steeply diminishing marginal utility but way slower productivity growth, so that the rise in productivity growth across the board coincidentally turned up at the same time as the “utility function kink”.
But if the technological advances that are allowing us to automate the production of everything people would ever be able to produce without the advances are also what allow for the invention of goods like good 2, it wouldn't be a coincidence. I.e., presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random "utility function kink") but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of "utility function kink" that has an arbitrary effect on GDP growth). The idea that we're soon producing very different products than we otherwise ever would have, whose productivity is growing at very different rates, seems all the more likely to me when we remember that even at 30% growth we're soon in an economy several orders of magnitude bigger: the kink just needs to show up somewhere, not anywhere near the current margin.
To reiterate what I noted at the beginning, though, I'd be surprised if the ambiguous second effect single-handedly outweighed the unambiguously positive first effect. And it could just as well amplify it, if "good 2" exhibits faster-than-average productivity growth.
Thanks!

> My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation. It arises not because of any limit on our ability to fully automate, but because a technological advance can encompass full automation and still go beyond it: beyond yielding a world where we can produce way more of everything we would ever have produced without the advance, it can let us produce goods we otherwise wouldn't have been on track to produce at all. This has not been widely appreciated.
OK but this key feature of not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive. I think this is in conflict with the normal understanding of what “automation” means! Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle, even if the additional efficiency means actual spending on a specific product goes from 0 to 1. And as long as we could produce a little of Good 2 pre-automation, the utility function in your model implies that the spending in the economy would eventually be dominated by Good 2 (and hence GDP growth rates would be set by the growth in productivity of Good 2) even without full automation (unless the ratio of Good-1 and Good-2 productivity is growing superexponentially in time).
What kind of product would we be unable to produce without full automation, even given arbitrary time to grow? Off the top of my head I can only think of something really ad hoc like "artisanal human paintings depicting the real-world otherwise-fully-autonomous economy".
That's basically what makes me think that "the answer is already in our utility function", which we could productively introspect on, rather than in some empirical uncertainty about what products full automation will introduce.
> presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random "utility function kink") but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of "utility function kink" that has an arbitrary effect on GDP growth).
I'm not sure what the best precise math statement to make here is, but I suspect that at least for "separable" utility functions of the form $u(x_1, \ldots, x_N) = \sum_n u_n(x_n)$ you need either a dramatic difference in diminishing returns for the $u_n$ (e.g., log vs. linear as in your model) or you need a super dramatic difference in the post-full-automation productivity growth curves (e.g., one grows exponentially and the other grows superexponentially) that is absent pre-automation. (I don't think it's enough that the productivities grow at different rates post-automation.) So I still think we can extract this from our utility function without knowing much about the future, although maybe there's a concrete model that would show that's wrong.
> not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive… Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle
Okay, I'm happy to change the title to (a more concise version of) "the ambiguous effect on GDP growth of a technological advancement that achieves full automation and also allows new goods to be introduced" if that would resolve the disagreement. [Update: have just changed the title and a few words of the body text; let me know.]
On the second point: in practice I don't think we have additively separable utility, and I don't know what you mean by "extracting this from our utility function". But anyway, if I'm understanding you, that is wrong: if your utility function is additively separable with an upper bound in each good, say $u(x) = \sum_n \max(0,\, 1 - 1/x_n)$, a technological shift can yield superexponential growth in the quantity of each good $n$ but exponential GDP growth. I'll write up a note on how that works this evening if that would be helpful, but I was hoping this post could just be a maximally simple illustration of the more limited point that Baumol-like effects can slow growth even past the point of full automation.
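As a preview of that note, here is a stylized sketch of my own (the doubly exponential quantity path $x_n(t) = e^{e^{t-n}}$ and the one-new-good-per-period introduction schedule are assumptions, not the author's example): every quantity grows superexponentially, yet the chained (Divisia) GDP growth rate settles at a constant, i.e., GDP grows exponentially.

```python
# Bounded separable utility u(x) = sum_n max(0, 1 - 1/x_n), with good n
# introduced at time n and x_n(t) = exp(exp(t - n)) afterwards. Since
# x_n(n) = e > 1, the kink never binds and marginal utility is p_n = 1/x_n^2.
# Divisia GDP growth = sum_n p_n * dx_n/dt / sum_n p_n * x_n.
import math

def divisia_growth(t, n_goods=500):
    num = den = 0.0
    for n in range(1, n_goods + 1):
        s = t - n
        if s < 0:
            continue  # good n not yet introduced
        es = math.exp(s)
        px = math.exp(-es)  # p_n * x_n     = exp(-e^s)
        num += es * px      # p_n * dx_n/dt = e^s * exp(-e^s)
        den += px
    return num / den

for t in [10, 50, 200]:
    print(f"t={t:4d}  GDP growth rate ~ {divisia_growth(t):.3f}")
# ~1.27 at every date: a constant rate (exponential GDP) despite superexponential quantities.
```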
Oh yeah, I didn't mind the title at all, although I do think it's usefully more precise now :)
Agreed on additively separable utility being unrealistic. My point (which wasn't clearly spelled out) was not that GDP growth and unit production can't look dramatically different. (We already see that in individual products like transistors (>> GDP) and rain dances (<< GDP).) It was that post-full-automation isn't crucially different from pre-full-automation unless you make some imo pretty extreme assumptions to distinguish them.
By “extracting this from our utility function”, I just mean my vague claim that, insofar as we are uncertain about GDP growth post-full-automation, understanding better the sorts of things people and superhuman intelligences want will reduce that uncertainty more than learning about the non-extreme features of future productivity heterogeneity (although both do matter if extreme enough). But I’m being so vague here that it’s hard to argue against.
Ok, fair enough—thanks for getting me to make it clearer :). So I guess the disagreement (if any remains, post-retitling/etc) is just about how plausible we think it is that the technological advances that accompany full automation will be accompanied by further technological advances that counterintuitively slow GDP growth through the “new-products-Baumol” mechanism illustrated here. I don’t think that’s so implausible, and hopefully the note I’ll write later will make it clearer where I’m coming from on that.
But this post isn’t aiming to argue for the plausibility, just the possibility. It seems to me that a lot of discussion of this issue hasn’t noticed that it’s even a theoretical possibility.
Here's an example in which utility is additively separable, $u_n(\cdot)$ is identical for all goods, the productivity and quantity of all goods grow hyperbolically, and yet GDP grows exponentially.