[Link post] Are we approaching the singularity?
Nobel Prize-winning economist William Nordhaus has written a paper called ‘Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth’. NBER working paper here and the 2021 published paper.
He discusses various tests of whether the singularity—a large trend break in economic growth—is near. He argues that the tests suggest that the singularity is not near, i.e. not before 2100. I would be interested to hear what people think about whether this is a good test of AI timeline predictions.
The relevant section is VII. Summarizing the six empirical tests:
You’d expect productivity growth to accelerate as you approach the singularity, but it is slowing.
The capital share should approach 100% as you approach the singularity. The share is growing, but at the slow rate of ~0.5%/year. At that rate it would take roughly 100 years to approach 100% (a rough arithmetic check follows this list).
Capital should get very cheap as you approach the singularity. But capital costs (outside of computers) are falling relatively slowly.
The total stock of capital should get large as you approach the singularity. In fact the stock of capital is slowly falling relative to output.
Information should become an increasingly important part of the capital stock as you approach the singularity. This share is increasing, but will also take >100 years to become dominant.
Wage growth should accelerate as you approach the singularity, but it is slowing.
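To make the arithmetic behind the share-based tests explicit, here is a back-of-the-envelope sketch. The starting share of ~40%, the reading of “~0.5%/year” as half a percentage point per year, and the 95% threshold are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope extrapolation for the share-based tests above.
# Assumptions (illustrative, not Nordhaus's figures): current capital share
# ~40%, growing linearly by ~0.5 percentage points per year, with
# "approaching 100%" read loosely as reaching 95%.
current_share = 0.40
growth_per_year = 0.005   # 0.5 percentage points per year
target_share = 0.95

years_needed = (target_share - current_share) / growth_per_year
print(f"Years until the capital share nears 100%: {years_needed:.0f}")  # ~110
```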
I would group these into two basic classes of evidence:
We aren’t getting much more productive, but that’s what a singularity is supposed to be all about.
Capital and IT extrapolations are potentially compatible with a singularity, but only on a timescale of 100+ years.
I’d agree that these seem like two points of evidence against singularity-soon, and I think that if I were going on outside-view economic arguments I’d probably be <50% singularity by 2100. (Though I’d still have a meaningful probability soon, and even at 100 years the prospect of a singularity would be one of the most important facts about the basic shape of the future.)
There are some more detailed aspects of the model that I don’t buy, e.g. the very high share of information capital and persistent slow growth of physical capital. But I don’t think they really affect the bottom line.
Thanks for outlining the tests.
I’m not really sure what he thinks the probability of the singularity before 2100 is. My reading is that, given his tests, he probably doesn’t think the singularity is (e.g.) >10% likely before 2100. 2 of the 7 tests suggest a singularity only after 100 years, and 5 of them fail. It might be worth someone asking him for his view on that.
To what extent is this a repudiation of Roodman’s outside-view projection? My guess is you’d say something like “This new paper is more detailed and trustworthy than Roodman’s simple model, so I’m assigning it more weight, but still putting a decent amount of weight on Roodman’s being roughly correct and that’s why I said <50% instead of <10%.”
I think that acceleration is autocorrelated: if things are accelerating rapidly at time T they are also more likely to be accelerating rapidly at time T+1. That’s intuitively pretty likely, and it seems to show up pretty strongly in the data. Roodman makes no attempt to model it, in the interest of simplicity and analytical tractability. We are currently in a stagnant period, and so I think you should expect continuing stagnation. I’m not sure exactly how large the effect is (and obviously it depends on the model), but I think it’s at least a 20-40 year delay. (There are two related angles to get a sense for the effect: one is to observe that autocorrelations seem to fade away on the timescale of a few doublings, rather than being driven by some amount of calendar time, and the other is to just look at the fact that we’ve had something like ~40 years of relative stagnation.)
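To make the autocorrelation point concrete, here is a toy sketch (not Roodman’s model, and all parameter values are made up): if the “acceleration” of growth persists from one period to the next, then starting from a stagnant period pushes out the date at which growth picks back up, relative to a memoryless process.

```python
import numpy as np

# Toy illustration (not Roodman's model; all parameters are illustrative).
# Growth-rate changes ("acceleration") follow an AR(1) process. Starting
# from a mildly decelerating state, we ask how long until growth first
# exceeds its starting level, for a memoryless process (rho = 0) versus
# a persistent one (rho = 0.95).
rng = np.random.default_rng(0)

def median_years_to_reaccelerate(rho, g0=0.02, a0=-0.001, sd=0.0003,
                                 horizon=300, trials=2000):
    delays = []
    for _ in range(trials):
        g, a = g0, a0
        for t in range(horizon):
            a = rho * a + rng.normal(0.0, sd)
            g += a
            if g > g0:
                delays.append(t + 1)
                break
        else:
            delays.append(horizon)  # never re-accelerated within the horizon
    return np.median(delays)

print("rho = 0.0 :", median_years_to_reaccelerate(0.0))
print("rho = 0.95:", median_years_to_reaccelerate(0.95))
```

Under these made-up parameters the median delay is about a year in the memoryless case and several decades in the persistent case; that’s the qualitative point, and nothing here pins down the 20-40 year figure.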
I think it’s plausible that historical acceleration is driven by population growth, and that just won’t really happen going forward. So at a minimum we should be uncertain between Roodman’s model and one that separates out population explicitly, which will tend to stagnate around the time population is limited by fertility rather than productivity.
(I agree with Max Daniel below that Nordhaus’s methodology isn’t inherently more trustworthy. I think it’s dealing with a relatively small amount of pretty short-term data, and is generally using a much more opinionated model of what technological change would look like.)
I don’t think this would be a good reaction because:
Nordhaus’s paper has only now been formally published, but it isn’t substantially newer than Roodman’s work. It has been available as an NBER working paper since at least 2018, and has been widely discussed among longtermists since then (e.g. I remember a conversation in fall 2018; there may have been earlier ones). [ETA: Actually, Nordhaus’s paper has circulated as a working/discussion paper since at least September 2015, and was e.g. mentioned in this piece of longtermist work from 2017.]
There are other similar papers, e.g. by Aghion et al.; see here (there is now also an edited volume of “Econ of AI” conference papers) and the GovAI webinar with Jones & Jones (you need to scroll down on that page).
I’ve only had the chance to skim Roodman’s work, but my quick impression is that it isn’t straightforwardly the case that Nordhaus’s model is “more detailed and trustworthy”. Rather, it seems to me that both models are more detailed along different dimensions: Roodman’s model explicitly incorporates noise/stochasticity, and in this sense is significantly more mathematically complex/sophisticated. On the other hand, Nordhaus’s model incorporates more theoretical assumptions, e.g. about different types of “factors of production” and their relationship as represented by a “production function”, similar to typical economic growth models. (Whereas Roodman is mostly fitting a model to a trend of a single quantity, in a way that’s more agnostic about the theoretical mechanisms generating that trend.)
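For readers unfamiliar with the distinction, a minimal sketch of the “fit a single quantity” style is below. It is not Roodman’s actual model (his is stochastic and considerably more sophisticated), and the data are synthetic, generated purely to show the fitting step; a Nordhaus-style model would instead specify a production function over labour, capital, and information capital.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the single-quantity trend-fit approach (illustrative only).
# Output is assumed to follow a hyperbola that diverges at a finite
# "singularity date" t_s; fitting the series then yields an estimate of t_s.
def hyperbolic(t, A, t_s, b):
    return A / (t_s - t) ** b

# Synthetic GWP-like series generated from the model plus noise. Real work
# would use a long-run gross world product dataset, not this.
rng = np.random.default_rng(1)
years = np.arange(1700.0, 2020.0, 10.0)
gwp = hyperbolic(years, A=5e4, t_s=2050.0, b=1.0)
gwp = gwp * np.exp(rng.normal(0.0, 0.05, size=years.shape))

params, _ = curve_fit(hyperbolic, years, gwp, p0=[1e4, 2100.0, 1.0],
                      bounds=([0.0, 2011.0, 0.1], [1e7, 3000.0, 5.0]))
print("fitted singularity date:", round(params[1], 1))
```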
As a matter of interest, where do papers such as this usually get discussed? Is it in personal conversation or in some particular online location?
I think in this case mostly informal personal conversations (which can include conversations e.g. within particular org’s Slack groups or similar). It might also have been a slight overstatement that the paper was “widely discussed”—this impression might be due to a “selection effect” of me having noticed the paper early and being interested in such work.