Exponential AI takeoff is a myth

TL;DR

Everything that looks like exponential growth eventually runs into limits and slows down. AI will quite soon run into limits of compute, algorithms, data, scientific progress, and predictability of our world. This reduces the perceived risk posed by AI and gives us more time to adapt.

Disclaimer

Although I have a PhD in Computational Neuroscience, my experience with AI alignment is limited. I haven’t engaged with the field much beyond reading Superintelligence and listening to the 80,000 Hours podcast. Therefore, I may duplicate or overlook arguments obvious to the field, or use the wrong terminology.

Introduction

Many arguments I have heard around the risks from AI go a bit like this: We will build an AI that will be as smart as humans, then that AI will be able to improve itself. The slightly better AI will again improve itself in a dangerous feedback loop and exponential growth will ultimately create an AI superintelligence that has a high risk of killing us all.

While I do recognize the other possible dangers of AI, such as engineering pathogens, manipulating media, or replacing human relationships, I will focus on that dangerous feedback loop, or “exponential AI takeoff”. There are, of course, also risks from human-level or slightly smarter systems, but I believe that the much larger, much less controllable risk would come from “superintelligent” systems. I’m arguing here that the probability of creating such systems via an “exponential takeoff” is very low.

Nothing grows exponentially indefinitely

This might be obvious, but let’s start here: Nothing grows exponentially indefinitely. The textbook example of exponential growth is a bacterial culture. Bacteria grow exponentially until they hit the wall of their petri dish, and then it’s over. Outside the lab, they grow exponentially until they hit some other constraint; in the end, all exponential growth is constrained. If you’re lucky, actual growth looks logistic (“S-shaped”), with the growth rate approaching 0 as resources are used up. If you’re unlucky, the population implodes.
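
To make the contrast concrete, here is a minimal sketch (in Python, with an arbitrary growth rate and carrying capacity chosen purely for illustration) of how exponential and logistic curves start out nearly identical and then diverge:

```python
import math

def exponential(t, x0=1.0, r=1.0):
    """Unbounded exponential growth: x(t) = x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=1.0, K=100.0):
    """Logistic ("S-shaped") growth: same early behavior, but the growth
    rate approaches 0 as x(t) approaches the carrying capacity K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on, the two curves are almost indistinguishable...
print(exponential(2), logistic(2))
# ...but later the logistic curve saturates at K while the exponential explodes.
print(exponential(10), logistic(10))
```

The point of the sketch is only that early data cannot distinguish the two regimes; the real question is where the carrying capacity lies.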

For the past few decades, we have watched things grow and grow without apparent limit, but that is slowly changing. The human population is starting to follow an S-curve, the number of scientific papers has been growing fast but is starting to flatten out, and even Silicon Valley has learnt that Metcalfe’s Law of super-linear network benefits breaks down due to the limits imposed by network complexity.

I am assuming that everybody will agree with the general argument above, but the relevant question is: when will we see the “flattening” of the curve for AI? Yes, growth is eventually limited, but if that limit only kicks in once AI has used up all the resources of our universe, that’s a bit too late for us. I believe the limits will kick in roughly when AI reaches our level of knowledge, give or take an order of magnitude, and here is why:

We’re reaching the limits of Moore’s law

First and foremost, growth in processing power is what enabled the growth of AI. I’m not going to guess when we will reach parity with the processing power of the human brain, but even if we do, we won’t scale far beyond that, because Moore’s law is slowing down.

Although I’m not a theoretical physicist, I believe there is significant evidence, anecdotal and otherwise, that Moore’s Law is reaching its limits. In 2015, the then Intel CEO stated that “our cadence today is closer to two and a half years than two”, and Wikipedia states that “the physical limits to transistor scaling have been reached” and that “most forecasters, including Gordon Moore, expect Moore’s law will end by around 2025”. If we look at the cost of computer memory and storage, we see that while it shrank exponentially for most of the last 50 years, we are already approaching the limits there as well.

I think there are two ways to counter this:

  1. Yes, humans are reaching the limits, but AI will be smarter than us and AI will figure it out.

  2. It doesn’t matter because we’ll just work with better algorithms and more data.

I’ll start with the second one:

We will probably reach the limits of algorithms

If compute is reaching its limits but we can use our compute ever more efficiently, then we can still scale exponentially. OpenAI published an article in 2020 showing that the compute needed for a given level of performance has actually been decreasing exponentially due to algorithmic improvements, and so far we’re not seeing any leveling off.

I think these improvements are fair to expect, given that early machine learning researchers were usually not professional software engineers focused on efficiency, so there should be a lot of potential in scaling these methods. However, it is theoretically quite clear that there will be limits here as well: you’re unlikely to train a billion parameters with a single subtraction operation. So the question is again: when will this taper off? We’ve seen similar developments with sorting algorithms: the largest efficiency gains came early, e.g. from Bubble Sort (1956) to Quicksort (1959), while new, slightly better algorithms are still being developed to this day (e.g. Timsort, 2002).
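
The sorting analogy can be made quantitative: the jump from an O(n²) algorithm to an O(n log n) one was enormous, but comparison-based sorting has a provable ⌈log₂(n!)⌉ lower bound, so all further improvements are constant factors. A small Python sketch of that gap:

```python
import math

def bubble_sort_comparisons(n):
    """Comparisons made by textbook bubble sort on n items: n*(n-1)/2,
    regardless of input order (the O(n^2) 1950s-era baseline)."""
    return n * (n - 1) // 2

n = 1000
quadratic = bubble_sort_comparisons(n)
# Information-theoretic lower bound for ANY comparison sort on n items:
lower_bound = math.ceil(math.log2(math.factorial(n)))
print(quadratic, lower_bound)
# The ratio bounds how much algorithmic progress was ever possible here.
print(quadratic / lower_bound)
```

Roughly a sixty-fold improvement was available for n = 1000, and then the well runs dry: no algorithm can keep halving the comparisons forever.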

Either way, both compute and algorithms, even if we made a magical breakthrough in quantum computing tomorrow, are ultimately limited by data. DeepMind showed in 2022 that more compute only makes sense if you have more data to feed into it. So even if we get exponentially scaling compute and algorithms, that would only give us the current models faster, not better. So what are the limits of data?

We’re reaching the limits of training data

Intuitively, it makes sense that data should be the limiting factor of AI growth. A human with an IQ of 150 growing up in the rainforest will become very good at identifying plants but won’t all of a sudden discover quantum physics. Similarly, an AI trained only on images of trees, even with 100 times more compute than we have now, will not be able to make progress in quantum physics.

(This is where the argument gets less quantitative and more hand-wavy, but stay with me.) I think it’s fair to assume that a large part of human knowledge is stored in books and on the internet, and we are already using most of it to train AIs. OpenAI didn’t publish what data they use to train their models, but let’s say it’s 10% of all of the internet and books. Since AI models need exponentially growing training data to get linear performance improvements, we can only expect relatively small improvements from feeding them the remaining 90%, which isn’t exactly exponential takeoff.
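
This diminishing return can be illustrated with a scaling-law calculation. The constants below are loosely based on the published “Chinchilla” fit, but treat the exact numbers, and the token counts standing in for “10% vs 100% of the internet”, as illustrative assumptions rather than measurements:

```python
def loss(data_tokens, B=410.7, beta=0.28, irreducible=1.69):
    """Data-limited term of a Chinchilla-style scaling law:
    L(D) = E + B / D^beta. All constants are illustrative."""
    return irreducible + B / data_tokens ** beta

l_10pct = loss(1e11)   # assumed: ~100B tokens, "10% of books + internet"
l_100pct = loss(1e12)  # assumed: ~1T tokens, all of it
print(l_10pct, l_100pct)
# 10x the data shrinks the reducible loss only by a factor of 10**0.28 (about 1.9):
# the last 90% of the data buys much less than the first 10% did.
```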

So let’s say we already use all of the internet and books as training data. What else could we do? One extreme option would be to strap a camera and a microphone (similar to Google Glass) to every human, record everything, and feed all of that data into a neural network. Even if we ignore the time it takes to record this data (more on this in the next paragraph), I would argue that the additional information in there is not of the same quality as books and the internet. Language is an information compression tool; we have condensed everything we learnt as a species over the last centuries into books. The additional knowledge gained from following us around would be marginal: maybe the AI would get a bit better at gossiping, learn of scientific discoveries a year before they are published, or understand human emotions better, but in the end, there is not much to be found there if the AI has already been exposed to all of our written knowledge.

But even if we reach the limits of training data, can’t AI just generate more data?

There are natural limits to the growth of knowledge

“AI will improve itself”, “AI will spiral out of control”, “AI will enter a positive feedback loop of learning”: these claims all assume that through reasoning alone, AI will be able to get better and better, bypassing all the limitations we have looked at so far. We have already seen that even if AI came up with a better training algorithm, that would help only marginally; what it would really have to do is generate novel data/knowledge on a large scale.

I’d argue that if it were that easy, science wouldn’t be that hard. There is a reason why we have separate fields of experimental and theoretical physics: a lot of things work in theory, until they are tested in the real world. And that testing is getting more and more cumbersome: while the number of scientific papers has been growing exponentially, in many fields the number of breakthrough discoveries has actually been shrinking. In pharma there is even the famous Eroom’s law (“Moore” spelled backwards): drug discovery is getting exponentially more difficult. Since the Scientific Revolution, we have picked all the “low-hanging fruit”, and it’s getting increasingly difficult to “generate more data” in the sense of generating knowledge.

I’m sure AI will be able to generate a lot of very good hypotheses by taking in all the current human knowledge, combining it, and advancing science that way, but testing these hypotheses in the real world is a manual process that takes a lot of time. It’s not something that can explode overnight, and judging by the recent struggles of science we’re reaching limits that AI will probably face sooner rather than later.

But AI doesn’t have to act in the real world and collect real data. Can’t it just improve in a simulation, just like the Go, chess, and StarCraft AIs played against themselves in simulations to improve?

We can’t simulate knowledge acquisition

We can’t simulate our world. If we could, we could generate infinite data. But the data we can simulate is only as good as the assumptions we put into the simulation, so it is again limited by current human knowledge.

Yes, we are using simulations right now to train self-driving cars, and they’ll probably eventually get better than humans, but they are limited by the assumptions we put in the simulation. They won’t be able to anticipate situations that we didn’t think of.

The great thing about Go, chess, and StarCraft is that all of them can be easily simulated, allowing AIs to generate knowledge across millions of iterations. The world they are tested in is the same simulated world they are trained in, so this works. But anybody who has ever tried to get a robot that was trained in simulation to work in real life knows that, unfortunately, this doesn’t easily translate. Simulations are inherently limited by the assumptions we put into them. There is a reason why AIs that live in a purely virtual space (such as language models or video game AIs) have had amazing breakthroughs, while robots still struggle to grab arbitrary objects. As an example, just compare the recent video of DeepMind’s robot soccer players falling over and over again with their impressive advances in StarCraft.

Another way to get around the time it takes to generate novel data would be to massively parallelize it: an AI could make many copies of itself, and if every copy learns something and pools that knowledge, the result could be exponential scaling. A chatbot with access to the internet could learn rapidly just by making copies of itself. However, this would need a lot of resources and would still be bound by the time it takes each copy to perform individual actions or measurements in the real world. While this can speed up AI development, it will still be slow compared to the feedback loops most people imagine when thinking of AI progress. Google just closed down yet another of their robot experiments (Everyday Robots) that used this as part of its learning strategy.
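
This parallelization ceiling is essentially Amdahl’s law: however many copies run in parallel, the total speedup is capped by the fraction of the learning loop that is a serial real-world experiment. A toy calculation (the 90/10 split is a made-up assumption for illustration):

```python
def amdahl_speedup(parallel_fraction, n_copies):
    """Amdahl's law: overall speedup with n_copies workers when only
    parallel_fraction of the workload can actually be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_copies)

# Assume 90% of "learning" parallelizes across copies, while 10% is
# waiting on physical actions and measurements.
for n in (10, 1000, 1_000_000):
    print(n, amdahl_speedup(0.9, n))
# The speedup saturates below 1 / 0.1 = 10x, no matter how many copies run.
```

Under that assumed split, a million copies buy less than a 10x speedup; the serial, physical part of knowledge generation dominates.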

There are natural limits to the predictability of the world

But what if we actually don’t need more data? What if all the knowledge we already have as humans, combined in one artificial brain, and with a misguided value system, is enough to outwit our species?

Let me make the most hand-wavy argument so far: the world is a random, complex system. We can’t predict the weather more than three days in advance, let alone what Trump will tweet tomorrow. There is no reason to believe that an AI 1000x smarter than us could do this, because in complex systems, small changes in state can have massive effects on the outcome. We don’t know the full state, and a 1000x smarter AI won’t know the full state either, due to the difficulty of acquiring knowledge from the real world discussed above. “No plan survives contact with the enemy”, because it is impossible, no matter how much compute you have, to predict the enemy. An AI can probably make better guesses than we can and come up with more alternative plans than we can, but it cannot beat the combinatorial explosion. It has to work with best-guess estimates, and these quickly lose value, just as our best-guess estimate of the weather loses value a few days into the future.
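
The weather example is a textbook case of deterministic chaos: a tiny error in the known state grows exponentially until prediction is worthless, and more compute does not recover the missing precision. A minimal Python illustration using the logistic map (with the parameter set to its chaotic regime):

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map; at r=4 it is fully chaotic:
    deterministic, yet sensitive to the tiniest state error."""
    return r * x * (1.0 - x)

# Two estimates of the "true" state, differing by one part in a billion:
a, b = 0.2, 0.2 + 1e-9
max_gap = 0.0
for step in range(60):
    a, b = logistic_map(a), logistic_map(b)
    max_gap = max(max_gap, abs(a - b))
print(max_gap)  # within 60 steps the two predictions diverge macroscopically
```

The same mechanism limits any forecaster, however smart: the measurement error in the initial state, not reasoning power, sets the prediction horizon.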

So even if, due to some flaw of the above arguments, AI would actually be able to scale exponentially in intelligence, I believe that the application of this intelligence would very quickly run into the limits imposed by the unpredictability of our world, leading again to a logistic growth of power of that AI, and not to an exponential growth.

AI will be very useful and maybe even smarter than us, but it won’t overpower us overnight

I have argued that AI will grow logistically, not exponentially, and that we will see the transition to logistic growth quite soon, as we approach the current limits of human knowledge. Tricks like simulation won’t get us much further, and even if they did, the power of that AI would still only grow logistically due to the limits imposed by the unpredictability of our world.

I have looked at five different constraints on the growth of AI: compute, algorithms, data, scientific progress, and predictability of our world. There are probably other constraints that I didn’t consider that could also have a limiting effect on the exponential growth of AI. Claiming that AI will grow exponentially is claiming that there will be NO constraints, which is a much stronger claim than saying that there will be A constraint, because one is enough to limit it from growing exponentially.

If we accept this line of reasoning, then AI probably has an upper limit of a very, very intelligent human being who somehow manages to keep all of human knowledge in their head. That’s quite impressive, but it’s not the same as an exponentially growing AI. It’s something we should be very careful with, but not avoid at all costs. I think it’s reasonable to assume that we will approach this limit not exponentially but logistically, with the last steps taking much more time than the first ones, which is what we are witnessing now. We will need to change our laws, adapt our intuitions, regulate the use of AI, and maybe even treat AIs as citizens, but it’s not something that can kill us within a day of reaching superhuman knowledge.

With this in mind, we can focus some of our attention on monitoring AI and working to integrate it into today’s world, while also not losing sight of all of the other issues we are facing.