Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I’m slightly less ignorant about economic theory than about everything else.
trammell
In Young’s case the exponent on ideas is one, and progress looks like log(log(researchers)). (You need to pay a fixed cost to make the good at all in a given period, so only if you go above that do you make positive progress.) See Section 2.2.
Peretto (2018) and Massari and Peretto (2025) have SWE models that I think do successfully avoid the knife-edge issue (or “linearity critique”), but at the cost of, in some sense, digging the hole deeper when it comes to the excess variety issue.
Thanks!
And yeah, that’s fair. One possible SWE-style story I sort of hint at there is that we have preferences like the ones I use in the horses paper; process efficiency for any given product grows exponentially with a fixed population; and there are fixed labor costs to producing any given product. In this case, it’s clear that measured GDP/capita growth will be exponential (but all “vertical”) with a fixed population. But if you set things up in just the right way, so that measured GDP always increases by the same proportion when the range of products increases by some marginal proportion, it will also be exponential with a growing population (“vertical”+”horizontal”).
But I think it’s hard not to have this all be a bit ad hoc / knife-edge. E.g. you’ll typically have to start out ever less productive at making the new products, or else the contribution to real GDP of successive % increases in the product range will blow up: as you satiate in existing products, you’re willing to trade ever more of them for a proportional increase in variety. Alternatively, you can say that the range of products grows subexponentially when the population grows exponentially, because the fixed costs of the later products are higher.
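To make the knife-edge explicit, here’s one way to write the condition down (my notation, purely illustrative): suppose measured GDP satisfies

$$\frac{\partial \log Y}{\partial \log N}=\alpha\ \text{(a constant)}\quad\Rightarrow\quad Y(t)=B\,N(t)^{\alpha}e^{gt},$$

where $N(t)$ is the product range and $g$ is the “vertical” process-efficiency contribution. If fixed labor costs per product tie the range to the population, $N(t)\propto L(t)=L_0e^{nt}$, then $\dot Y/Y=g+\alpha n$: exponential overall, but only because the elasticity $\alpha$ stays exactly constant as the range expands, which is the knife-edge.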
Second-wave endogenous growth models and automation
A bit tangential, but I can’t help sharing a data point I came across recently on how prepared the US government currently is for advanced AI: our secretary of education apparently thinks it stands for “A1”, like the steak sauce (h/t). (On the bright side, of course, this is a department the administration is looking to shut down.)
(FYI though, we’ve chatted about several new-variety issues that I think could come up in the event of a big change in “growth mode”, and this post is just about one of them.)
No worries, good to hear!
Thanks! People have certainly argued at least since Marx that if the people owning the capital get all the income, that will affect the state. I think more recent/quantitative work on this, e.g. by Stiglitz, has generally focused on the effects of inequality in wealth or income, rather than the effects of inequality via a high capital share per se. But this isn’t my area at all—ask your favorite LLM : )
The reference point argument is also about consumption inequality rather than what gives rise to it. My guess would be that if we all really get radical life extension and a huge quantity of amazing goods and services, that will probably for most people outweigh whatever jealousy comes with the knowledge that others got more, but who knows.

In any event, my guess would be that even if the marginal product of labor stays high or rises following full automation, most people’s incomes will eventually come not from wages, but from interest on whatever investments they have (even if they started small) or from redistribution. And full automation could well trigger so much redistribution that income inequality shrinks, since it will remove one motivation for letting income inequality remain high today: namely that, unlike with robots, taxing productive people too much can discourage them from working as much.
Great to hear, thanks!
As for the prediction—fair enough. Just to clarify though, I’m worried that the example makes it look like we need growth in the new good(s) to get this weird slow-GDP-growth result, but that’s not true. In case that’s the impression you got, the example below illustrates how we can have superexponential growth in every good but (arbitrarily slow) exponential growth in GDP.
Here’s an example in which utility is additively separable, is identical for all goods, the productivity and quantity of all goods grow hyperbolically, and yet GDP grows exponentially.
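A sketch of one way the mechanics can work, under assumptions of my own choosing (bounded exponential utility and evenly spaced product introductions; not necessarily the exact construction of the example): with additively separable utility $U(t)=\sum_n u(c_n(t))$ and prices proportional to marginal utilities, the Divisia growth rate of real GDP is

$$\frac{\dot Y}{Y}=\frac{\sum_n u'(c_n)\,\dot c_n}{\sum_n u'(c_n)\,c_n}=\frac{\dot U}{\sum_n u'(c_n)\,c_n}.$$

Take $u(c)=1-e^{-c}$ and let a new good enter every $\Delta$ years, each quantity following the same hyperbolic path from a small initial level. Because $u$ is bounded, a good far along its path contributes $u(c_n)\approx 1$ to $U$ and $u'(c_n)\,c_n=c_n e^{-c_n}\approx 0$ to the denominator, so both $\dot U$ and the denominator settle down to roughly constant values: $\dot Y/Y$ is roughly constant, i.e. GDP grows exponentially, even though every individual quantity grows hyperbolically.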
Ok, fair enough—thanks for getting me to make it clearer :). So I guess the disagreement (if any remains, post-retitling/etc) is just about how plausible we think it is that the technological advances that accompany full automation will be accompanied by further technological advances that counterintuitively slow GDP growth through the “new-products-Baumol” mechanism illustrated here. I don’t think that’s so implausible, and hopefully the note I’ll write later will make it clearer where I’m coming from on that.
But this post isn’t aiming to argue for the plausibility, just the possibility. It seems to me that a lot of discussion of this issue hasn’t noticed that it’s even a theoretical possibility.
not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive… Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle
Okay, I’m happy to change the title to (a more concise version of) “the ambiguous effect of a technological advancement that achieves full automation, and also allows new goods to be introduced on GDP growth” if that would resolve the disagreement. [Update: have just changed the title and a few words of the body text; let me know.]
On the second point: in practice I don’t think we have additively separable utility, and I don’t know what you mean by “extracting this from our utility function”. But anyway, if I’m understanding you, that is wrong: if your utility function is additively separable with an upper bound on the utility from each good, a technological shift can yield superexponential growth in the quantity of each good but only exponential GDP growth. I’ll write up a note on how that works this evening if that would be helpful, but I was hoping this post could just be a maximally simple illustration of the more limited point that Baumol-like effects can slow growth even past the point of full automation.
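To gesture at why, take a hypothetical bounded per-good utility, say $u(c)=1-e^{-c}$ (an illustrative form of mine, not the one from the post): at a consumer optimum, good $n$’s price is proportional to $u'(c_n)=e^{-c_n}$, so its nominal value is proportional to $c_n e^{-c_n}$, which tends to 0 however fast $c_n$ grows. Each good’s weight in GDP vanishes as its quantity explodes, so measured growth ends up governed by the arrival and pricing of newer, scarcer goods rather than by quantity growth in the goods we already have.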
Right, I’m just not taking a stand here.
I might be more pessimistic than you about wages on balance, since I would argue for the importance of the “reallocation of capital from labor-augmenting to non-labor-augmenting uses” point, which if strong enough could lower wages through a channel other than the DRS+PS one you focus on.
Hey Jess, thanks for the thoughtful comments.
On whether “this is still basically Baumol”
If we make that one tiny tweak and say that good 2 was out there to be made all along, just too expensive to be demanded, then yes, it’s the same! That was the goal of the example: to introduce a Baumol-like effect in a way so similar to how everyone agrees Baumol effects have played out historically that it’s really easy to see what’s going on.
I’m happy to say it’s basically Baumol. The distinction I think is worth emphasizing here is that, when people say “Baumol effects could slow down the growth effects of AI”, they are usually—I think always, in my experience—pointing to the fact that if
1. AI dramatically speeds productivity growth on most goods but leaves some that only humans can produce (say, artisanal handicrafts), and
2. consumers see those goods as not very substitutable for the goods we’re getting way more productive at,
then GDP growth won’t speed up much. This then invites the response that, when we look around, there doesn’t seem to be any human-only good strongly satisfying (1) and (2). My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation. It arises not from any limit on our ability to automate, but from the fact that a technological advance can achieve full automation and also go further: beyond yielding a world where we can produce way more of everything we would ever have produced without the advance, it can let us produce some goods we otherwise wouldn’t have been on track to produce at all. This has not been widely appreciated.
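For reference, the standard version of that argument in symbols, with a textbook CES aggregator (my illustration, not notation from anyone’s post): if

$$C=\Big(\textstyle\sum_i \omega_i\,c_i^{\frac{\sigma-1}{\sigma}}\Big)^{\frac{\sigma}{\sigma-1}},\qquad \sigma<1,$$

and the sectoral quantities grow at constant but different exponential rates, then $\dot C/C\to\min_i \dot c_i/c_i$: the slowest-growing sector eventually dominates. Conditions (1) and (2) are what make that slow, hard-to-substitute sector a human-only one; the claim here is that the same asymptotics can arise with no human-only sector at all.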
On whether there is any reason to expect the productivity acceleration to coincide with the “kink in the utility function”
Here I think I disagree with you more substantively, though maybe the disagreement stems from the small framing point above.
If indeed “good 2” were always out there, just waiting for its price to fall, and if a technology were coming that would just replace all our existing workers, factory parts, and innovators with versions that operate more quickly in equal proportion—so that we move along the same paths of technology and the quantity of each good produced, but more quickly—then I agree that the past would be a good guide to the future, and GDP growth would reliably rise a lot. The only way it wouldn’t would be if the goods we were just about to start producing anyway were like good 2, featuring less steeply diminishing marginal utility but way slower productivity growth, so that the rise in productivity growth across the board coincidentally turned up at the same time as the “utility function kink”.
But if the technological advances that are allowing us to automate the production of everything people would ever be able to produce without the advances are also what allow for the invention of goods like good 2, it wouldn’t be a coincidence. I.e. presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random “utility function kink”) but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of “utility function kink” that has an arbitrary effect on GDP growth). The idea that we’re soon producing very different products than we otherwise ever would have, whose productivity is growing at very different rates, seems all the more likely to me when we remember that even at 30% growth we’re soon in an economy several orders of magnitude bigger than today’s: the kink just needs to show up somewhere, not necessarily anywhere near the current margin.
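(To put numbers on that last point: at 30% annual growth, $1.3^{25}\approx 7\times10^{2}$ and $1.3^{50}\approx 5\times10^{5}$, so within a generation or two the relevant margins are hundreds to hundreds of thousands of times today’s scale.)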
To reiterate what I noted at the beginning though, I’d be surprised if the ambiguous second effect single-handedly outweighed the unambiguously positive first effect. And it could just as well amplify it, if “good 2” exhibits faster than average productivity growth.
Great! I do think the case of constant returns to scale with different uses of capital is also important though, as is the case of constant or mildly decreasing returns to scale with just a little bit of complementarity.
Thanks for pointing me to that post! It’s getting at something very similar.
I should look through the comments there, but briefly, I don’t agree with his idea that GDP at 1960 prices is basically the right GDP-esque metric to look at to get an idea of “how crazy we should expect the future to look”, from the perspective of someone today. After all, GDP at 1960 prices tells us how crazy today looks from the perspective of someone in the 1960s.
If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon at this year’s prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative, but I’m still mostly at the stage of reading and thinking (and asking o1).
The ambiguous effect of full automation + new goods on GDP growth
The ambiguous effect of full automation on wages
Depends how much it costs to lengthen life, and how much more the second added century costs than the first, and what people’s discount rates are… but yes, agreed that allowing for increased lifespan is one way the marginal utility of consumption could really rise!
Hello, thank you for your interest!
Students from other countries can indeed apply. The course itself will be free of charge for anyone accepted.
We also hope to offer some or all attendees room, board, and transportation reimbursement, but how many people will be offered this support, and to what extent, will depend on the funding we receive and on the number, quality, and geographic dispersion of the applicants. When decisions are sent out, we’ll also notify those accepted about what support they are offered.
I know David well, and David, if you’re reading this, apologies if it comes across as a bit uncharitable. But as far as I’ve ever been able to see, every important argument he makes in any of his papers against longtermism or the astronomical value of x-risk reduction was refuted pretty unambiguously before it was written. An unfortunate feature of an objection that comes after its own rebuttal is that sometimes people familiar with the arguments will skim it and say “weird, nothing new here” and move on, and people encountering it for the first time will think no response has been made.
For example,[1] I think the standard response to his arguments in “The Scope of Longtermism” would just be the Greaves and MacAskill “Case for Strong Longtermism”.[2] The Case, in a nutshell, is that by giving to the Planetary Society or B612 Foundation to improve our asteroid/comet monitoring, we do more than 2x as much good in the long term, even on relatively low estimates of the value of the future, than giving to the top GiveWell charity does in the short term. So if you think GiveWell tells us the most cost-effective way to improve the short term, you have to think that, whenever your decision problem is “where to give a dollar”, the overall best action does more good in the long term than in the short term.
You can certainly disagree with this argument on various grounds—e.g. you can think that non-GiveWell charities do much more good in the short term, or that the value of preventing extinction by asteroid is negative, or for that matter that the Planetary Society or B612 Foundation will just steal the money—but not with the arguments David offers in “The Scope of Longtermism”.
His argument [again, in a nutshell] is that there are three common “scope-limiting phenomena”, i.e. phenomena that make it the case that the overall best action does more good in the long term than in the short term in relatively few decision situations. These are
1. rapid diminution (the positive impact of the action per unit time quickly falls to 0),
2. washing out (the long-term impact of the action has positive and negative features which are hard to predict and cancel out in expectation), and
3. option unawareness (there’s an action that would empirically have large long-term impact, but we don’t know what it is).
He grants that when Congress was deciding what to do with the money that originally went into an asteroid monitoring program called the Space Guard Survey, longtermism seems to have held. So he’s explicitly not relying on an argument that there isn’t much value to trying to prevent x-risk from asteroids. Nevertheless, he never addresses the natural follow-up regarding contributing to improved asteroid monitoring today.
Re (1), he cites Kelly (2019) and Sevilla (2021) as reasons to be skeptical of claims from the “persistence” literature about various distant cultural, technological, or military developments having had long-term effects on the arc of history. Granting this doesn’t affect the Case that whenever your decision problem is “where to give a dollar”, the overall best action does more good in the long term than in the short term.[3]
Re (2), he says that we often have only weak evidence about a given action’s impact on the long-term future. He defends this by pointing out (a) that attempts to forecast actions’ impacts on a >20-year timescale have a mixed track record, (b) that professional forecasters are often skeptical of the ability to make such forecasts, and (c) that the overall impact of an action on the value of the world is typically composed of its impacts on various other variables (e.g. the number of people and how well-off they are), and since it’s hard to forecast any of these components, it’s typically even harder to forecast the action’s impact on value itself. None of this applies to the Case. We can grant that most actions have hard-to-predict long-term consequences, and that forecasters would recognize this, without denying that in most decision situations (including all those where the question is where to give a dollar), there is one action that has long-term benefits more than 2x as great as the short-term benefits of giving to the top GiveWell charity: namely the action of giving to the Planetary Society or B612 Foundation. There is no mixed track record of forecasting the >20-year impact of asteroid/comet monitoring, no evidence that professional forecasters are skeptical of making such forecasts, and he implicitly grants that the complexity of forecasting its long-term impact on value isn’t an issue in this case when it comes to the Space Guard Survey.
Re (3), again, the claim the Case makes is that we have identified one such action.
I also emailed him about an objection to his “Existential Risk Pessimism and the Time of Perils” in November and followed up in February, but he’s responded only to say that he’s been too busy to consider it.
Which he cites! Note that Greaves and MacAskill defend a stronger view than the one I’m presenting here, in particular that all near-best actions do much more good in the long term than in the short term. But what David argues against is the weaker view I lay out here.
Incidentally, he cites the fact that “Hiroshima and Nagasaki returned to their pre-war population levels by the mid-1950s” as an especially striking illustration of lack of persistence. But as I mentioned to him at the time, it’s compatible with the possibility that those regions have some population path, and we “jumped back in time” on it, such that from now on the cities always have about as many people at t as they would have had at t+10. If so, bombing them could still have most of its effects in the future.