My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation. This isn't due to any limitation on our ability to fully automate, but to the fact that a technological advance can encompass full automation and go beyond it: beyond yielding a world where we can produce way more of everything we would have produced anyway, it can let us produce some goods we otherwise wouldn't have been on track to produce at all. This has not been widely appreciated.
OK but this key feature of not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive. I think this is in conflict with the normal understanding of what “automation” means! Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle, even if the additional efficiency means actual spending on a specific product goes from 0 to 1. And as long as we could produce a little of Good 2 pre-automation, the utility function in your model implies that the spending in the economy would eventually be dominated by Good 2 (and hence GDP growth rates would be set by the growth in productivity of Good 2) even without full automation (unless the ratio of Good-1 and Good-2 productivity is growing superexponentially in time).
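Here's a minimal numerical sketch of that claim. I'm assuming, from the "log vs. linear" comparison below, that the model's utility is log in Good 1 and linear in Good 2; the exact functional form is my guess, not something confirmed in the post:

```python
import numpy as np

# Sketch under assumed utility u(x1, x2) = log(x1) + x2, with one unit of
# labor split as l1 + l2 = 1, quantities x_i = A_i * l_i, and labor as the
# numeraire (so good i's price is 1/A_i and its spending share is l_i).
# The interior optimum sets 1/l1 = A2, i.e. l1 = min(1, 1/A2): spending on
# Good 2 is literally zero until A2 crosses 1, after which its share -> 1
# and Divisia real-GDP growth -> g2, Good 2's productivity growth rate.

g1, g2 = 0.08, 0.02   # exponential productivity growth rates of Goods 1, 2
A2_0 = 1e-3           # Good 2 is producible "at least a little" from the start

for t in [0, 200, 400, 600, 1000]:
    A2 = A2_0 * np.exp(g2 * t)
    if A2 <= 1.0:                      # corner: no spending on Good 2 yet
        s2, gdp_growth = 0.0, g1
    else:
        s1 = 1.0 / A2                  # Good 1's spending share (= l1)
        s2 = 1.0 - s1
        # Divisia growth = share-weighted quantity growth:
        # x1 = A1/A2 grows at g1 - g2; x2 = A2 - 1 grows at g2 * A2/(A2 - 1)
        gdp_growth = s1 * (g1 - g2) + s2 * g2 * A2 / (A2 - 1.0)
    print(f"t={t:5d}  Good-2 share = {s2:.3f}  GDP growth ≈ {gdp_growth:.3f}")
```

The Good-2 share climbs to 1 and the growth rate falls from g1 to g2: Good 2 comes to set the GDP growth rate without any "full automation" event.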
What kind of product would we be unable to produce without full automation, even given arbitrary time to grow? Off the top of my head I can only think of something really ad-hoc like “artisanal human paintings depicting the real-world otherwise-fully-autonomous economy”.
That’s basically what makes me think that “the answer is already in our utility function”, which we could productively introspect on, rather than in some empirical uncertainty about what products full automation will introduce.
Presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random “utility function kink”) but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of “utility function kink” that has an arbitrary effect on GDP growth).
I’m not sure what the best precise math statement to make here is, but I suspect that at least for “separable” utility functions of the form $u(x_1,\ldots,x_N)=\sum_n u_n(x_n)$ you need either a dramatic difference in diminishing returns for the $u_n$ (e.g., log vs. linear as in your model) or a super dramatic difference in the post-full-automation productivity growth curves (e.g., one grows exponentially and the other grows superexponentially) that is absent pre-automation. (I don’t think it’s enough that the productivities grow at different rates post-automation; see the sketch below.) So I still think we can extract this from our utility function without knowing much about the future, although maybe there’s a concrete model that would show that’s wrong.
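A quick check of that parenthetical, as a sketch with a shared log utility chosen for concreteness:

```python
import numpy as np

# With identical u_n -- here u_n = log, i.e. identical diminishing returns --
# expenditure shares are constant (log utility pins each good's share at 1/N
# under an optimal labor split), so Divisia GDP growth is just a fixed
# share-weighted average of the productivity growth rates. Different
# post-automation growth rates alone produce no Baumol-style drag or kink.

g = np.array([0.30, 0.02])                # very different growth rates
shares = np.full(len(g), 1.0 / len(g))    # constant shares under sum_n log(x_n)
print("GDP growth rate:", shares @ g)     # 0.16, forever
```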
not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive… Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle
Okay, I’m happy to change the title to (a more concise version of) “the ambiguous effect on GDP growth of a technological advancement that achieves full automation and also allows new goods to be introduced” if that would resolve the disagreement. [Update: have just changed the title and a few words of the body text; let me know.]
On the second point: in practice I don’t think we have additively separable utility, and I don’t know what you mean by “extracting this from our utility function”. But anyway, if I’m understanding you, that is wrong: if your utility function is additively separable with an upper bound in each good, say $u(x)=\sum_n \max(0,\, 1-1/x_n)$, a technological shift can yield superexponential growth in the quantity of every good $n$ but only exponential GDP growth. I’ll write up a note on how that works this evening if that would be helpful, but I was hoping this post could just be a maximally simple illustration of the more limited point that Baumol-like effects can slow growth even past the point of full automation.
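As a compressed sketch of how that can happen (this is my own toy construction, not necessarily the one the promised note will use): new goods keep arriving, each entering scarce, and with $u_n'(x_n) = 1/x_n^2$ good $n$'s spending share is proportional to $1/x_n$, so spending concentrates on the newest, slowest-growing goods.

```python
import numpy as np

# Toy construction: a new good enters every `gap` units of time at quantity 1,
# and log x_n = a * age^2, so each good's quantity growth rate 2*a*age grows
# without bound (superexponential quantities). With u_n(x) = max(0, 1 - 1/x),
# marginal utility is 1/x^2 and good n's spending share is proportional to
# 1/x_n, so shares pile up on the young, slow-growing goods. The Divisia
# growth rate of real GDP settles at the constant 2*sqrt(a/pi): GDP is
# merely exponential.

a, gap = 1.0, 0.01

for t in [5.0, 10.0, 20.0]:
    ages = np.arange(gap, t, gap)         # ages of goods alive at time t
    weights = np.exp(-a * ages**2)        # unnormalized shares, ∝ 1/x_n
    shares = weights / weights.sum()
    growth = 2 * a * ages                 # d log x_n / dt, unbounded in age
    print(f"t={t:5.1f}  GDP growth = {(shares * growth).sum():.4f}")

print("limit:", 2 * np.sqrt(a / np.pi))   # ≈ 1.1284
```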
Oh yeah, I didn’t mind the title at all (although I do think it’s usefully more precise now :)
Agreed on additively separable utility being unrealistic. My point (which wasn’t clearly spelled out) was not that GDP growth and unit production can’t look dramatically different. (We already see that in individual products like transistors (>> GDP) and rain dances (<< GDP).) It was that post-full-automation isn’t crucially different from pre-full-automation unless you make some imo pretty extreme assumptions to distinguish them.
By “extracting this from our utility function”, I just mean my vague claim that, insofar as we are uncertain about GDP growth post-full-automation, understanding better the sorts of things people and superhuman intelligences want will reduce that uncertainty more than learning about the non-extreme features of future productivity heterogeneity (although both do matter if extreme enough). But I’m being so vague here that it’s hard to argue against.
Ok, fair enough; thanks for getting me to make it clearer :). So I guess the disagreement (if any remains, post-retitling/etc.) is just about how plausible we think it is that the technological advances which achieve full automation will come bundled with further advances that counterintuitively slow GDP growth through the “new-products-Baumol” mechanism illustrated here. I don’t think that’s so implausible, and hopefully the note I’ll write later will make it clearer where I’m coming from on that.
But this post isn’t aiming to argue for the plausibility, just the possibility. It seems to me that a lot of discussion of this issue hasn’t noticed that it’s even a theoretical possibility.
Here’s an example in which utility is additively separable, $u_n(\cdot)$ is identical for all goods, the productivity and quantity of all goods grow hyperbolically, and yet GDP grows exponentially.
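For anyone who doesn’t want to click through, here is my guess at the skeleton of that example (hedged; the linked note has the actual derivation). Goods enter at quantity 1 and grow hyperbolically toward staggered blowup dates, while the bounded $u_n(x) = 1 - 1/x$ from the earlier comment pushes spending onto the newest goods:

```python
import numpy as np

# Guessed skeleton (see the linked note for the real thing): good n exists
# from t = T_n - 1, entering at quantity 1 and growing hyperbolically,
# x_n(t) = 1/(T_n - t), toward a blowup at its date T_n. With bounded
# u_n(x) = 1 - 1/x, marginal utility is 1/x^2, so good n's spending share
# is ∝ 1/x_n = T_n - t. Every live good then contributes
# share * growth = const to the Divisia index, and with a steady flow of
# new goods the GDP growth rate is constant: exponential GDP despite
# hyperbolic growth in every individual good.

gap = 0.001   # a new good is introduced every `gap` units of time

for t in [10.0, 20.0, 40.0]:
    T = t + np.linspace(gap, 1.0, int(1.0 / gap))  # blowup dates of live goods
    w = T - t                       # unnormalized spending shares, ∝ 1/x_n
    g = 1.0 / (T - t)               # hyperbolic quantity growth rates
    print(f"t={t:5.1f}  GDP growth = {(w * g).sum() / w.sum():.3f}")  # ≈ 2.0
```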
Thanks!