Super-exponential growth implies that accelerating growth is unimportant in the long run

The basic argument

A somewhat common goal in EA (more common elsewhere) is to accelerate the human trajectory by promoting things such as economic growth, tech R&D, or general population growth. The hope is that doing so would accelerate exponential economic growth throughout the very long-run future, which could compound into huge improvements in welfare.

But economic growth has historically been super-exponential, with growth rates rising as the economy grows larger, so the global economic trend points clearly towards a theoretical economic singularity within the 21st century.[1] This is not contradicted by the 'inside view': while there are some indications that our society is currently stagnating,[2] there are also conceivable opportunities for explosive progress in the medium run.[3]
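To see why super-exponential growth implies a finite-time singularity, here is a minimal deterministic sketch (my own simplification; Roodman's actual model is stochastic and fit to data): if the growth rate itself rises with the level of output, output diverges in finite time.

```latex
% Minimal deterministic sketch (my simplification; Roodman's model is stochastic).
% If the growth rate rises with the level of output y, with feedback strength \epsilon > 0:
\[
\frac{dy}{dt} = a\,y^{1+\epsilon}
\;\;\Longrightarrow\;\;
y(t) = \bigl(y_0^{-\epsilon} - \epsilon a t\bigr)^{-1/\epsilon},
\]
% which diverges in finite time, at
\[
t^* = \frac{1}{\epsilon\, a\, y_0^{\epsilon}}.
\]
% Ordinary exponential growth is the boundary case \epsilon = 0, which never diverges.
```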

This does not mean we will actually reach infinite economic growth; in practice, a true singularity is presumably impossible. But the forecast does suggest an acceleration towards extraordinary economic growth until we reach the limits of things like nonrenewable resources, thermodynamics, or lightspeed (call it a 'transition'), assuming that we don't suffer a globally catastrophic event first.

Counterfactually growing the economy today therefore has an impact of the following form. First, it makes the economy larger in 2025, much larger in 2030, and so on. Near the point of transition, the returns approach extremely high values: a billion dollars of growth today might make the economy a trillion dollars larger in 2045. Early growth also moves the actual date of transition closer, so for a short period (2046, say) it has astronomical returns. Afterwards, however, there is little difference: whether or not we had grown the economy in 2020, humanity would still spend everything from 2047 onward (possibly more than 10^20 years[4]) under the same fundamental limits. A toy simulation below illustrates this shape.

Of course, this period might not last long if we face many existential risks. But the known natural existential risks, like supernovas and asteroids, are generally very rare and/or avoidable for very advanced spacefaring civilizations. A bigger question is whether internal conflicts would lead to ruin. (I've seen some arguments by Robin Hanson and others that warfare and competition will continue indefinitely, but don't recall any links or titles.) And the Doomsday Argument implies that civilization probably won't last long, though it is contested: there is debate over the nature of the probabilistic argument, and we might be succeeded by posthuman descendants who don't fit within our reference class. Or we might cure aging, stop reproducing, and live indefinitely as a small civilization of billions or tens of billions of people (assuming the Doomsday Argument works through the self-sampling assumption rather than the self-indication assumption).
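To make the impact shape described above concrete, here is a toy Python simulation (my own sketch with made-up parameters a, eps, and cap, not Roodman's model): hyperbolic growth capped at an arbitrary stand-in for physical limits.

```python
# Toy illustration of the impact shape described above (my own sketch with
# made-up parameters, not Roodman's model): hyperbolic growth dy/dt = a*y^(1+eps),
# capped at an arbitrary stand-in for physical limits.

def trajectory(y0, a=0.02, eps=0.5, cap=1e6, dt=0.01, t_max=200.0):
    """Euler-integrate dy/dt = a * y**(1 + eps), saturating at `cap`."""
    t, y, path = 0.0, y0, []
    while t < t_max:
        path.append((t, y))
        y = min(cap, y + a * y ** (1 + eps) * dt)
        t += dt
    return path

base    = trajectory(y0=1.00)
boosted = trajectory(y0=1.01)   # economy counterfactually 1% larger today

# The boosted run pulls ahead dramatically as the transition nears, reaches
# the cap slightly earlier, and is then indistinguishable from the baseline.
for i in range(0, len(base), 2000):
    t, yb = base[i]
    _, yx = boosted[i]
    print(f"t={t:6.1f}  baseline={yb:12.5g}  boosted={yx:12.5g}  ratio={yx/yb:.4f}")
```

The printed ratio starts at 1.01, climbs steeply in the decades before the transition, and then collapses back to exactly 1.0 once both trajectories hit the cap.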

Regardless, a very long future for civilization is likely enough that its value greatly outweighs improvements in economic welfare for 2020-2047 (or whatever the actual dates turn out to be). Therefore, while super-exponential growth implies that growing the economy produces huge returns in the medium run, it also implies that growing the economy is unimportant compared to safeguarding the overall trajectory of the future.

One argument for growth is to maximize the region of space that we can colonize. The ultimate upper bound on our future trajectory is lightspeed. Due to the expansion of the universe, each moment that we delay in colonizing the universe, more matter recedes permanently beyond the volume of everything we can ever reach. That volume is theoretically bounded by the cosmological event horizon (comparable in size to the Hubble volume), and is realistically something smaller, since lightspeed can never actually be reached. However, it is my informal understanding that the loss from delay is quite small.[5]
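A back-of-envelope check supports this (my own numbers, not Ryan Carey's comments): the affectable region is bounded by the cosmological event horizon, whose comoving radius is roughly 16-17 billion light years, and a year of delay shrinks that radius by roughly one light year at the present scale factor.

```latex
% Back-of-envelope (my own numbers, not from the post): fractional loss of
% reachable volume per year of delay, with comoving horizon radius R ~ 16.5 Gly
% shrinking by roughly one light year per year of delay today:
\[
\frac{\Delta V}{V} \approx \frac{3\,\Delta R}{R}
\approx \frac{3 \times 1\ \text{ly}}{1.65 \times 10^{10}\ \text{ly}}
\approx 2 \times 10^{-10} \ \text{per year of delay.}
\]
```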

If you do some napkin math based on these assumptions (a rough version follows), economic growth is not merely less important than reducing existential risk; it seems outright negligible by comparison.
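Here is one version of that napkin math as a Python sketch, with illustrative placeholder numbers (mine, not careful estimates): even a one-in-a-million absolute reduction in existential risk dwarfs a 10% welfare boost confined to the pre-transition window.

```python
# Illustrative napkin math with placeholder numbers (mine, not the post's).
# Compare a growth boost whose benefits are confined to the pre-transition
# window against a tiny absolute reduction in existential risk, whose
# benefits apply to the entire long-run future.

years_pre_transition = 27      # e.g. 2020-2047
welfare_per_year     = 1.0     # normalize one year of world welfare to 1
growth_boost         = 0.10    # raise pre-transition welfare by 10%

future_years         = 1e20    # potential post-transition lifespan [4]
xrisk_reduction      = 1e-6    # one-in-a-million bump in survival probability

value_of_growth = growth_boost * welfare_per_year * years_pre_transition
value_of_xrisk  = xrisk_reduction * welfare_per_year * future_years

print(f"growth boost:     {value_of_growth:.3g} welfare-years")
print(f"x-risk reduction: {value_of_xrisk:.3g} welfare-years")
print(f"ratio:            {value_of_xrisk / value_of_growth:.3g}")   # ~4e13
```

If anything this understates the gap, since it values a post-transition year at today's welfare level rather than at the vastly higher levels a transition implies.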

Other considerations

There are still some reasons to care substantively about economic growth. First, model uncertainty: maybe the world really is settling into steady-state exponential growth, despite Roodman's conclusions. In that case, accelerating growth could compound into a very large counterfactual difference over the long run.
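The contrast with the super-exponential case is worth spelling out: under steady exponential growth, a one-time boost never washes out, so its absolute counterfactual value compounds indefinitely.

```latex
% Under steady-state exponential growth (dy/dt = g y), a one-time fractional
% boost \delta at t = 0 never washes out:
\[
y_{\text{boosted}}(t) - y(t) = \delta\, y_0\, e^{g t},
\]
% an absolute gap that itself compounds at rate g for as long as growth continues,
% unlike the super-exponential case, where all trajectories hit the same ceiling.
```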

Also, counterfactual economic growth, with its associated technological progress, can affect existential risk in the medium run. Economic growth certainly helps against naturally occurring total existential risks, like huge asteroids striking the planet, but such risks have extremely low base rates and some are even predictably absent in the medium run. Economic growth has more ambiguous impacts on existential risk from smaller naturally occurring catastrophes, like the Yellowstone supervolcano or a small asteroid: it gives us greater power to deal with these threats, but it may also make our society so complex that one of them triggers a cascading failure. Still, these risks have very low base rates and so are very unlikely to materialize in the medium run.

Anthropogenic existential risks are associated with economic growth in the first place, so accelerating growth usually seems not to change the chance that they kill us; it just brings them closer in time. Just as growth hastens the day we create existential risks, it also hastens the day we create solutions to them.

Differential moral/sociopolitical progress relative to economic growth can change the picture. Rapid economic growth might outstrip the progress in our noneconomic capacities. Of course, moral, social and political changes are largely driven by economic growth, but they also seem to be exogenous to some extent. Such changes may be important for addressing anthropogenic existential risks, in which case slowing down the economy would reduce existential risk. Conversely, it's possible that exogenous moral and political change produces existential risks and that accelerating economic growth helps us avoid those, but this possibility seems much less likely, considering the track record of human civilization.

At the end of the day, accelerating the human trajectory, including accelerating global economic growth, cannot be substantiated as a policy priority. It likely has medium-term benefits and might have some long-run benefits, but the prospect of hastening anthropogenic existential risks in the medium term is too worrying.

[1] An endogenous model of gross world product using stochastic calculus, developed by David Roodman at the Open Philanthropy Project: https://www.openphilanthropy.org/blog/modeling-human-trajectory

[2] See the first part of this compilation of Peter Thiel's views, which has many anecdotes about the ways that progress has stagnated, though it is limited by a mostly US-centric point of view: https://docs.google.com/document/d/1zao_AyBhNb8TPWrQqgXn5NzNAgfEqzTIaFYos7wdqGI/edit#

[3] AGI, bioengineering, web-based circumvention of exclusionary economic institutions (mainly, efficient remote work that bypasses barriers against immigration and urban growth), fusion power, geoengineering, undersea/lunar/asteroid mining.

[4] Anders Sandberg's talk on grand futures: https://www.youtube.com/watch?v=Y6kaOgjY7-E. I am referencing the end of the 'degenerate era' of dying cool stars that he discusses a bit over an hour in, but this is pretty arbitrary; he describes orders-of-magnitude longer subsequent eras that might be liveable.

[5] Comments by Ryan Carey.