Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I’m slightly less ignorant about economic theory than about everything else.
No worries, good to hear!
Thanks! People have certainly argued at least since Marx that if the people owning the capital get all the income, that will affect the state. I think more recent/quantitative work on this, e.g. by Stiglitz, has generally focused on the effects of inequality in wealth or income, rather than the effects of inequality via a high capital share per se. But this isn’t my area at all—ask your favorite LLM : )
The reference point argument is also about consumption inequality rather than what gives rise to it. My guess would be that if we all really get radical life extension and a huge quantity of amazing goods and services, that will probably, for most people, outweigh whatever jealousy comes with the knowledge that others got more, but who knows.

In any event, my guess would be that even if the marginal product of labor stays high or rises following full automation, most people’s incomes will eventually come not from wages but from interest on whatever investments they have (even if they started small) or from redistribution. And full automation could well trigger so much redistribution that income inequality shrinks, since it will remove one motivation for letting income inequality remain high today, namely that, unlike with robots, taxing productive people too much can discourage them from working as much.
Great to hear, thanks!
As for the prediction—fair enough. Just to clarify though, I’m worried that the example makes it look like we need slow growth in the new good(s) to get this weird slow GDP growth result, but that’s not true. In case that’s the impression you got, this example illustrates how we can have superexponential growth in every good but (arbitrarily slow) exponential growth in GDP.
Here’s an example in which utility is additively separable, is identical for all goods, the productivity and quantity of all goods grow hyperbolically, and yet GDP grows exponentially.
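For concreteness, here’s a minimal numerical sketch of the kind of thing I mean. All the specifics below are illustrative stand-ins rather than the exact example: per-good utility is the bounded form u(c) = 1 - exp(-c), goods enter at regular intervals at a small quantity and then grow double-exponentially (a numerically tamer stand-in for hyperbolic growth), prices are taken proportional to marginal utilities, and real GDP is chain-weighted (Divisia), ignoring the jump when a good first enters.

```python
# Minimal numerical sketch (illustrative stand-ins, not the exact example).
# Goods n = 0, 1, 2, ... share the bounded utility u(c) = 1 - exp(-c). Good n
# enters at time n*T at quantity c0 and then grows double-exponentially:
# c_n(t) = c0 * exp(exp(t - n*T) - 1). Prices are taken proportional to
# marginal utility, p_n = exp(-c_n), and real GDP growth is computed as a
# Divisia index: sum over goods of (expenditure share) * (quantity growth
# rate), ignoring the jump when a good first enters.
import numpy as np

T, c0, dt = 0.5, 0.1, 0.001                   # introduction spacing, entry quantity, time step
t_grid = np.arange(0.0, 6.5, dt)
intro_times = np.arange(0.0, t_grid[-1] + T, T)

log_gdp = np.zeros_like(t_grid)
for i, t in enumerate(t_grid[:-1]):
    age = t - intro_times
    age = age[age >= 0.0]                     # goods introduced so far
    log_c = np.log(c0) + np.expm1(age)        # log of c_n(t)
    c = np.exp(np.minimum(log_c, 700.0))      # cap to avoid overflow; the weight there is ~0 anyway
    weight = np.exp(log_c - c)                # p_n * c_n = c_n * exp(-c_n)
    share = weight / weight.sum()             # expenditure shares
    growth = np.exp(age)                      # d log c_n / dt
    log_gdp[i + 1] = log_gdp[i] + share @ growth * dt

late = t_grid >= 4.0
print("average GDP growth rate over the late period:",
      round((log_gdp[-1] - log_gdp[late][0]) / (t_grid[-1] - t_grid[late][0]), 2))
print("quantity growth rate of the oldest good at the end:",
      round(float(np.exp(t_grid[-1] - intro_times[0])), 1))
```

The share-weighted growth rate settles into a roughly repeating pattern set by the pace at which new goods arrive, even as every individual good’s quantity growth rate explodes, so measured GDP ends up growing only exponentially.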
Ok, fair enough—thanks for getting me to make it clearer :). So I guess the disagreement (if any remains, post-retitling/etc) is just about how plausible we think it is that the technological advances that accompany full automation will be accompanied by further technological advances that counterintuitively slow GDP growth through the “new-products-Baumol” mechanism illustrated here. I don’t think that’s so implausible, and hopefully the note I’ll write later will make it clearer where I’m coming from on that.
But this post isn’t aiming to argue for the plausibility, just the possibility. It seems to me that a lot of discussion of this issue hasn’t noticed that it’s even a theoretical possibility.
not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive… Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle
Okay, I’m happy to change the title to (a more concise version of) “the ambiguous effect of a technological advancement that achieves full automation, and also allows new goods to be introduced, on GDP growth” if that would resolve the disagreement. [Update: have just changed the title and a few words of the body text; let me know.]
On the second point: in practice I don’t think we have additively separable utility, and I don’t know what you mean by “extracting this from our utility function”. But anyway, if I’m understanding you, that is wrong: if your utility function is additively separable with an upper bound on the utility from each good, a technological shift can yield superexponential growth in the quantity of every good but only exponential GDP growth. I’ll write up a note on how that works this evening if that would be helpful, but I was hoping this post could just be a maximally simple illustration of the more limited point that Baumol-like effects can slow growth even past the point of full automation.
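In the meantime, here’s the bare-bones logic, written with one illustrative bounded functional form of my own choosing (the note may use a different one):

```latex
% Bare-bones sketch; the bounded functional form is an illustrative choice.
\[
U(c_1,\dots,c_N) \;=\; \sum_{n=1}^{N} u(c_n), \qquad u(c) = 1 - e^{-c},
\]
% so relative prices track marginal utilities, $p_n \propto u'(c_n) = e^{-c_n}$,
% and good $n$'s expenditure share is
\[
s_n \;=\; \frac{p_n c_n}{\sum_m p_m c_m} \;=\; \frac{c_n e^{-c_n}}{\sum_m c_m e^{-c_m}},
\]
% which collapses toward zero as $c_n$ grows. Chain-weighted (Divisia) real GDP
% growth is
\[
\frac{\dot Y}{Y} \;=\; \sum_n s_n \,\frac{\dot c_n}{c_n},
\]
% which can stay merely exponential even while every $\dot c_n / c_n$ explodes,
% because the goods growing fastest are exactly the ones whose shares are
% vanishing; measured growth is then driven by the stream of newly introduced
% goods.
```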
Right, I’m just not taking a stand here.
I might be more pessimistic than you about wages on balance, since I would argue for the importance of the “reallocation of capital from labor-augmenting to non-labor-augmenting uses” point, which if strong enough could lower wages through a channel other than the DRS+PS one you focus on.
Hey Jess, thanks for the thoughtful comments.
On whether “this is still basically Baumol”
If we make that one tiny tweak and say that good 2 was out there to be made all along, just too expensive to be demanded, then yes, it’s the same! That was the goal of the example: to introduce a Baumol-like effect in a way so similar to how everyone agrees Baumol effects have played out historically that it’s really easy to see what’s going on.
I’m happy to say it’s basically Baumol. The distinction I think is worth emphasizing here is that, when people say “Baumol effects could slow down the growth effects of AI”, they are usually—I think always, in my experience—pointing to the fact that if
1. AI dramatically speeds productivity growth on most goods but leaves some that only humans can produce (say, artisanal handicrafts), and
2. consumers see those goods as not very substitutable for the goods we’re getting way more productive at,
then GDP growth won’t speed up much. This then invites the response that, when we look around, there doesn’t seem to be any human-only good strongly satisfying (1) and (2). My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation, not due to any limit on our ability to fully automate, but due to the fact that a technological advance can encompass full automation and go beyond merely yielding a world where we can produce way more of everything we would ever have produced without it, by letting us produce some goods we otherwise wouldn’t have been on track to produce at all. This has not been widely appreciated.
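For reference, the standard version of (1) + (2) I have in mind is the textbook two-good Baumol setup, sketched below with a generic CES form (this is not the post’s example):

```latex
% Textbook two-good Baumol setup (generic CES form; not the post's example).
\[
U \;=\; \Big( c_A^{\frac{\sigma-1}{\sigma}} + c_H^{\frac{\sigma-1}{\sigma}} \Big)^{\frac{\sigma}{\sigma-1}},
\qquad \sigma < 1,
\]
% where $c_A$ is the automatable bundle and $c_H$ the human-only good. As
% productivity in $A$ races ahead, the relative price of $H$ rises, its
% expenditure share heads toward one (since $\sigma < 1$), and chain-weighted
% real GDP growth converges to (roughly) the growth rate of $c_H$. The point
% above is that a similar drag can arise with no human-only good at all, via
% goods that only become producible after the advance.
```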
On whether there is any reason to expect the productivity acceleration to coincide with the “kink in the utility function”
Here I think I disagree with you more substantively, though maybe the disagreement stems from the small framing point above.
If indeed “good 2” were always out there, just waiting for its price to fall, and if a technology were coming that would just replace all our existing workers, factory parts, and innovators with versions that operate more quickly in equal proportion—so that we move along the same paths of technology and the quantity of each good produced, but more quickly—then I agree that the past would be a good guide to the future, and GDP growth would reliably rise a lot. The only way it wouldn’t would be if the goods we were just about to start producing anyway were like good 2, featuring less steeply diminishing marginal utility but way slower productivity growth, so that the rise in productivity growth across the board coincidentally turned up at the same time as the “utility function kink”.
But if the technological advances that are allowing us to automate the production of everything people would ever be able to produce without the advances are also what allow for the invention of goods like good 2, it wouldn’t be a coincidence. I.e. presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random “utility function kink”) but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of “utility function kink” that has an arbitrary effect on GDP growth). The idea that we’re soon producing very different products than we otherwise ever would have, whose productivity is growing at very different rates, seems all the more likely to me when we remember that even at 30% growth we’re soon in an economy several orders of magnitude bigger: the kink just needs to show up somewhere, not anywhere near the current margin.
To reiterate what I noted at the beginning though, I’d be surprised if the ambiguous second effect single-handedly outweighed the unambiguously positive first effect. And it could just as well amplify it, if “good 2” exhibits faster than average productivity growth.
Great! I do think the case of constant returns to scale with different uses of capital is also important though, as is the case of constant or mildly decreasing returns to scale with just a little bit of complementarity.
Thanks for pointing me to that post! It’s getting at something very similar.
I should look through the comments there, but briefly, I don’t agree with his idea that GDP at 1960 prices is basically the right GDP-esque metric to look at to get an idea of “how crazy we should expect the future to look”, from the perspective of someone today. After all, GDP at 1960 prices tells us how crazy today looks from the perspective of someone in the 1960s.
If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon in this-year prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative suggestion, but I’m still mostly at the stage of reading and thinking (and asking o1).
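To make the caviar point concrete, here’s a toy calculation with entirely made-up numbers:

```python
# Toy illustration of the caviar point; all numbers are made up.
# "This-year" prices treat caviar as precious, so valuing next year's output
# at this-year prices makes GDP balloon even though, at next-year prices, the
# change looks far less dramatic.

p_now = {"bread": 1.0, "caviar": 100.0}      # this-year prices
q_now = {"bread": 100.0, "caviar": 0.1}      # this-year quantities

p_next = {"bread": 1.0, "caviar": 1.0}       # next year caviar is cheap...
q_next = {"bread": 100.0, "caviar": 1000.0}  # ...and abundant (caviar-fueled cars folded in)

def value(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

print("GDP growth factor at this-year prices:",
      round(value(p_now, q_next) / value(p_now, q_now)))    # ~910x: looks wild from today's vantage
print("GDP growth factor at next-year prices:",
      round(value(p_next, q_next) / value(p_next, q_now)))  # ~11x: much less extreme
```

The same change in quantities looks about tenfold at next-year prices but nearly a thousandfold at this-year prices, simply because this-year prices still treat caviar as precious.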
Depends how much it costs to lengthen life, and how much more the second added century costs than the first, and what people’s discount rates are… but yes, agreed that allowing for increased lifespan is one way the marginal utility of consumption could really rise!
Hello, thank you for your interest!
Students from other countries can indeed apply. The course itself will be free of charge for anyone accepted.
We also hope to offer some or all attendees room, board, and transportation reimbursement, but how many people will be offered this support, and to what extent, will depend on the funding we receive and on the number, quality, and geographic dispersion of the applicants. When decisions are sent out, we’ll also notify those accepted about what support they are offered.
I think this is a good point, predictably enough—I touch on it in my comment on C/H/M’s original post—but thanks for elaborating on it!
For what it’s worth, I would say that historically, the introduction of new goods seems to have significantly mitigated but not overturned the tendency for consumption increases to lower the marginal utility of consumption. So my central guess is (a) that in the event of a growth acceleration (AI-induced or otherwise), the marginal utility of consumption would in fact fall, and more relevantly (b) that most investors anticipating an AI-induced acceleration to their own consumption growth would expect their marginal utility of consumption to fall. So I think this point identifies a weakness in the argument of the paper/post (as originally written; they now caveat it with this point), namely a reason why you can’t literally infer investors’ beliefs about AGI purely from interest rates, but it doesn’t in isolation refute the point that a low interest rate is evidence that most investors don’t anticipate AGI soon.
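For what it’s worth, the piece of standard machinery I have in mind is the Ramsey/Euler condition, written below with CRRA utility for simplicity (C/H/M’s exact setup may differ):

```latex
% Ramsey/Euler condition with CRRA utility (a simplification; C/H/M's exact
% setup may differ).
\[
r \;=\; \rho \;+\; \eta\, g_c ,
\]
% where $\rho$ is pure time preference, $g_c$ is expected consumption growth,
% and $\eta > 0$ measures how quickly the marginal utility of consumption
% falls as consumption rises. Anticipated AGI-driven growth (high $g_c$)
% pushes up $r$ only to the extent that marginal utility really does fall
% ($\eta$ meaningfully positive); if new goods kept marginal utility roughly
% flat, high expected growth could coexist with low interest rates. My guess
% above is that new goods mitigate but do not eliminate the fall, so low
% rates remain some evidence against expectations of AGI soon.
```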
Thanks! No—I’ve spoken with them a little bit about their content but otherwise they were put together independently. Theirs is remote, consists mainly of readings and discussions, and is meant to be at least somewhat more broadly accessible; ours is in person at Stanford, consists mainly of lectures, and is meant mainly for econ grad students and people with similar backgrounds.
Okay great, good to know. Again, my hope here is to present the logic of risk compensation in a way that makes it easy to make up your mind about how you think it applies in some domain, not to argue that it does apply in any domain. (And certainly not to argue that a model stripped down to the point that the only effect going on is a risk compensation effect is a realistic model of any domain!)
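To illustrate the bare logic, here’s a deliberately stripped-down toy with functional forms of my own choosing (picked for transparency, not realism; they happen to deliver the extreme full-offset case):

```python
# Deliberately stripped-down toy of risk compensation (functional forms chosen
# for transparency, not realism). An actor picks a capability level C to
# maximize the expected payoff V(C) * P(C, S), where S is an exogenous level
# of safety work and P is the probability that things go well.
import numpy as np

alpha = 0.5                                    # curvature of the payoff V(C) = C**alpha
C_grid = np.linspace(1e-3, 20.0, 20001)        # candidate capability levels

def P(C, S):
    return np.exp(-C / (1.0 + S))              # risk rises with C, falls with S

def chosen_C(S):
    """Capability level maximizing expected payoff at safety level S."""
    return C_grid[np.argmax(C_grid**alpha * P(C_grid, S))]

for S in [0.0, 1.0, 4.0]:
    C = chosen_C(S)
    print(f"S = {S:.0f}: chosen C = {C:.2f}, resulting P = {P(C, S):.3f}")

# With these forms the optimum is C* = alpha * (1 + S): the actor responds to
# extra safety by scaling capabilities up in proportion, and the resulting
# P = exp(-alpha) does not improve at all. Gentler functional forms give
# partial rather than full offset.
```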
As for the role of preference-differences in the AI risk case—if what you’re saying is that there’s no difference at all between capabilities researchers’ and safety researchers’ preferences (rather than just that the distributions overlap), that’s not my own intuition at all. I would think that if I learn
that two people have similar transhumanist-y preferences except that one discounts the distant future (or future generations), and so cares primarily about achieving amazing outcomes in the next few decades for people alive today, whereas the other cares primarily about the “expected value of the lightcone”; and
that one works on AI capabilities and the other works on AI safety,
my guess about who was who would be a fair bit better than random.
But I absolutely agree that epistemic disagreement is another reason, and could well be a bigger reason, why different people put different values on safety work relative to capabilities work. I say a few words about how this does / doesn’t change the basic logic of risk compensation in the section on “misperceptions”: nothing much seems to change if the parties just disagree in a proportional way about the magnitude of the risk at any given levels of C and S—though this disagreement can change who prioritizes which kind of work, it doesn’t change how the risk compensation interaction plays out. What really changes things there is if the parties disagree about the effectiveness of marginal increases to S, or really, if they disagree about the extent to which increases to S decrease the extent to which increases to C lower P.
In any event though, if what you’re saying is that a framing more applicable to the AI risk context would have made the epistemic disagreement bit central and the preference disagreement secondary (or swept under the rug entirely), fair enough! I look forward to seeing that presentation of it all if someone writes it up.
My understanding is that the consumption of essentially all animal products seems to increase in income at the country level across the observed range, whether or not you control for various things. See the regression table on slide 7 and the graph of “implied elasticity on income” on slide 8 here.
I’m not seeing the paper itself online anywhere, but maybe reach out to Gustav if you’re interested.
Thank you!
And thanks for the IIT / Pautz reference, that does seem relevant. Especially to my comment on the “superlinearity” intuition that experience should probably be lost, or at least not gained, as the brain is “disintegrated” via corpus callosotomy… let me know (you or anyone else reading this) if you know whether IIT, or some reasonable precisification of it, says that the “amount” of experience associated with two split brain hemispheres is more or less than with an intact brain.
Thanks for noting this possibility—I think it’s the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day. To paraphrase the idea there: even if we have a phenomenal field that’s analogous to our field of vision, and one being’s can be bigger than another’s, attention may be sort of like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, just as adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision; but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived by adding red dots to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than “the whole body except for one arm”, then putting the amputee in an ice bath is about as bad as putting the non-amputee in one, and for that matter putting all but one arm of a non-amputee in an ice bath is about as bad as putting the whole of the non-amputee in.
Something like this seems like a reasonable possibility to me as well. It still doesn’t seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don’t think I’d be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.
Even if “scope of attention” is the thing that really matters in the way I’m proposing “size” does, though, I think most of what I’m suggesting in this post can be maintained, since presumably “scope” can’t be bigger than “size”, and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to “scope”.
Glad to see you found my post thought-provoking, but let me emphasize that my own understanding is also partial at best, to put it mildly!
(FYI though I think we’ve chatted about several “new varieties” issues that could come up in the event of a big change in “growth mode”, and this post is just about one of them.)