Exponential AI takeoff is a myth
TL;DR
Everything that looks like exponential growth eventually runs into limits and slows down. AI will quite soon run into limits of compute, algorithms, data, scientific progress, and predictability of our world. This reduces the perceived risk posed by AI and gives us more time to adapt.
Disclaimer
Although I have a PhD in Computational Neuroscience, my experience with AI alignment is limited. I haven't engaged much with the field beyond reading Superintelligence and listening to the 80k Hours podcast. Therefore, I may duplicate or overlook arguments obvious to the field, or use the wrong terminology.
Introduction
Many arguments I have heard about the risks from AI go roughly like this: we will build an AI that is as smart as humans, and that AI will be able to improve itself. The slightly improved AI will improve itself again, and this dangerous feedback loop of exponential growth will ultimately create an AI superintelligence that has a high chance of killing us all.
While I do recognize the other possible dangers of AI, such as engineering pathogens, manipulating media, or replacing human relationships, I will focus on that dangerous feedback loop, or "exponential AI takeoff". There are, of course, also risks from human-level-or-slightly-smarter systems, but I believe that the much larger, much less controllable risk would come from "superintelligent" systems. I'm arguing here that the probability of creating such systems via an "exponential takeoff" is very low.
Nothing grows exponentially indefinitely
This might be obvious, but let's start here: nothing grows exponentially indefinitely. The textbook example of exponential growth is a bacterial culture. It grows exponentially until it hits the side of its petri dish, and then it's over. Outside the lab, it grows exponentially until it hits some other constraint, but in the end all exponential growth is constrained. If you're lucky, actual growth will look logistic ("S-shaped"), where the growth rate approaches zero as resources are eaten up. If you're unlucky, the population implodes.
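To make the exponential-versus-logistic distinction concrete, here is a minimal sketch of my own, with made-up parameter values: the two curves are nearly indistinguishable early on, but the logistic one saturates at its carrying capacity.

```python
# Minimal sketch (illustrative values): exponential vs. logistic growth.
# Both look almost identical at first; the logistic curve saturates at the carrying capacity K.
import math

r, K, p0 = 0.5, 1000.0, 1.0   # growth rate, carrying capacity, starting population (all made up)

def exponential(t):
    return p0 * math.exp(r * t)

def logistic(t):
    return K / (1 + ((K - p0) / p0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.1f}")
```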
For the last few decades, we have seen things growing and growing without limit, but we're slowly seeing a change. The human population is starting to follow an S-curve, the number of scientific papers has been growing fast but is starting to flatten out, and even Silicon Valley has learnt that Metcalfe's Law of network benefits growing with the square of the number of users doesn't hold up against the limits imposed by network complexity.
I assume everybody will agree with the general argument above, but the relevant question is: when will we see the "flattening" of the curve for AI? Yes, growth is eventually limited, but if that limit only kicks in once AI has used up all the resources of our universe, that's a bit too late for us. I believe the limits will kick in as soon as AI reaches our level of knowledge, give or take an order of magnitude, and here is why:
We're reaching the limits of Moore's law
First and foremost, the growth of processing power is what enabled the growth of AI. I'm not going to guess when we reach parity with the processing power of the human brain, but even if we do, we won't grow fast beyond that, because Moore's law is slowing down.
Although I'm not a theoretical physicist, I believe there is significant evidence, anecdotal and otherwise, that Moore's Law is reaching its limits. In 2015, Intel's then-CEO stated that "our cadence today is closer to two and a half years than two", and Wikipedia states that "the physical limits to transistor scaling have been reached" and that "Most forecasters, including Gordon Moore, expect Moore's law will end by around 2025." If we look at the cost of computer memory and storage, we see that while it shrank exponentially for most of the last 50 years, we're already reaching the limits there.
I think there are two ways to counter this:
1. Yes, humans are reaching the limits, but AI will be smarter than us and AI will figure it out.
2. It doesn't matter because we'll just work with better algorithms and more data.
I'll start with the second one:
We will probably reach the limits of algorithms
If compute is reaching its limits but we can use that compute ever more efficiently, then we'll still scale exponentially. OpenAI published an article in 2020 showing that the compute needed to reach a given level of performance has in fact been decreasing exponentially thanks to algorithmic improvements, and so far we're not seeing any levelling off.
I think these improvements are fair to expect, given that early machine learning researchers were usually not professional software engineers focused on efficiency, so there should be a lot of untapped potential when scaling these methods. However, it is theoretically quite clear that there will be limits to this as well: you're unlikely to train a billion parameters with a single subtraction operation. So the question is again: when will this taper off? We've seen similar developments with sorting algorithms, for example: the largest efficiency gains came early, e.g. from bubble sort (1956) to Quicksort (1959), while new, slightly better algorithms are still being developed to this day (e.g. Timsort, 2002).
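As a small, hedged illustration of how big those early algorithmic gains were (my own toy benchmark, not a claim about any particular library): even on a modest list, the jump from an O(n²) bubble sort to the O(n log n) Timsort built into Python is dramatic, while the differences between modern sorts are comparatively tiny.

```python
# Hedged illustration (my own toy benchmark): an O(n^2) bubble sort vs. Python's
# built-in sorted(), which uses Timsort (O(n log n)).
import random
import time

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [random.random() for _ in range(5_000)]

t0 = time.perf_counter()
bubble_sort(data)
t1 = time.perf_counter()
sorted(data)
t2 = time.perf_counter()
print(f"bubble sort: {t1 - t0:.2f} s   built-in Timsort: {t2 - t1:.4f} s")
```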
Either way, both compute and algorithms, even if we made a magical breakthrough in quantum computing tomorrow, are ultimately limited by data. DeepMind's 2022 scaling-law work (the "Chinchilla" paper) showed that more compute only makes sense if you have more data to feed it. So even if we got exponentially scaling compute and algorithms, that would only give us the current models faster, not better. So what are the limits of data?
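Before turning to that question, here is a rough back-of-the-envelope sketch of the compute-data coupling. It assumes the commonly cited rules of thumb of roughly 6·N·D training FLOPs and about 20 training tokens per parameter; these are approximations of mine, not exact figures from the paper.

```python
# Back-of-the-envelope sketch of compute-optimal scaling. Assumes the commonly
# cited approximations C ~ 6 * N * D training FLOPs and D ~ 20 * N tokens per
# parameter; both are rules of thumb, not exact figures from the paper.
def compute_optimal(flops_budget, tokens_per_param=20, flops_per_param_token=6):
    # C = 6 * N * D and D = 20 * N  =>  N = sqrt(C / 120)
    n_params = (flops_budget / (flops_per_param_token * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):
    n, d = compute_optimal(budget)
    print(f"compute budget {budget:.0e} FLOPs -> ~{n:.1e} params, ~{d:.1e} tokens")
```

The point of the sketch: under these assumptions, a compute-optimal model grows its parameter count and its training data together, so extra compute without extra data is largely wasted.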
We're reaching the limits of training data
Intuitively, it makes sense that data should be the limiting factor of AI growth. A human with an IQ of 150 growing up in the rainforest will be very good at identifying plants, but won't suddenly discover quantum physics. Similarly, an AI trained only on images of trees, even with 100 times more compute than we have now, will not be able to make progress in quantum physics.
(This is where we start to get less quantitative and more hand-wavy, but stay with me.) I think it's fair to assume that a large part of human knowledge is stored in books and on the internet, and we are already using most of this to train AIs. OpenAI didn't publish what data they use to train their models, but let's say it's 10% of all of the internet and books. Since AI models need exponentially growing training data to get linear performance improvements, that would mean we can only expect relatively small improvements from feeding them the remaining 90%, which isn't exactly exponential takeoff.
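To put a rough number on that intuition, here is a hedged sketch assuming a simple power-law data-scaling curve, loss proportional to D^(-alpha), with an illustrative exponent in the ballpark of published language-model fits; the token counts are hypothetical.

```python
# Hedged sketch: if loss follows a power law in dataset size, L(D) = a * D**(-alpha),
# then using 10x more data only buys a modest relative improvement.
alpha = 0.095   # illustrative exponent, roughly in the range reported for data scaling of language models
a = 10.0        # arbitrary scale factor

def loss(d_tokens):
    return a * d_tokens ** (-alpha)

d_small = 1e12   # hypothetical: 10% of the usable text
d_full = 1e13    # hypothetical: all of it

print(f"loss with 10% of the data:  {loss(d_small):.3f}")
print(f"loss with 100% of the data: {loss(d_full):.3f}")
print(f"relative improvement from the remaining 90%: {1 - loss(d_full) / loss(d_small):.1%}")
```

Under these made-up numbers, training on ten times more text improves the loss by only around 20%: useful, but nothing like a takeoff.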
So let's say we already use all of the internet and books as training data. What else could we do? One extreme option would be to strap a camera and a microphone (similar to Google Glass) to every human, record everything, and feed all of that data into a neural network. Even if we ignore the time it takes to record this data (more on this in the next paragraph), I would argue that the additional information in there is not of the same quality as books and the internet. Language is an information compression tool: we condensed everything we learnt as a species over the last centuries into books. The additional knowledge gained from following us around would be marginal. Maybe the AI would get a bit better at gossiping, maybe it would get scientific discoveries a year earlier than they are published, or understand human emotions better, but in the end there is not much to be gained there if the AI has already been exposed to all of our written knowledge.
But even if we reach the limits of training data, can't AI just generate more data?
There are natural limits to the growth of knowledge
"AI will improve itself", "AI will spiral out of control", "AI will enter a positive feedback loop of learning": these claims all assume that through reasoning alone, AI will be able to get better and better, sidestepping all the limitations we have looked at so far. We have already seen that even if an AI came up with a better training algorithm, that would only help marginally; what it would really have to do is generate novel data, i.e. knowledge, on a large scale.
I'd argue that if it were that easy, science wouldn't be that hard. There is a reason why we have separate fields of experimental and theoretical physics: a lot of things work in theory until they are tested in the real world. And that testing is getting more and more cumbersome. While the number of scientific papers has been growing exponentially, in many fields the number of breakthrough discoveries has actually been declining. In pharma there is even the famous Eroom's law (Moore spelled backwards), which says that drug discovery is getting exponentially slower and more expensive. Since the Scientific Revolution, we have picked much of the low-hanging fruit, and it is getting increasingly difficult to "generate more data" in the sense of generating knowledge.
I'm sure AI will be able to generate a lot of very good hypotheses by taking in all the current human knowledge, combining it, and advancing science that way, but testing these hypotheses in the real world is a manual process that takes a lot of time. It's not something that can explode overnight, and judging by the recent struggles of science, we're reaching limits that AI will probably face sooner rather than later.
But AI doesn't have to act in the real world and collect real data. Can't it just improve in a simulation, the way the Go, chess, and StarCraft AIs played against themselves in simulations to improve?
We can't simulate knowledge acquisition
We can't simulate our world. If we could, we could generate infinite data, but simulated data is only as good as the assumptions we put into the simulation, so it is again limited by current human knowledge.
Yes, we are using simulations right now to train self-driving cars, and they'll probably eventually get better than humans, but they are limited by the assumptions we put into the simulation. They won't be able to anticipate situations that we didn't think of.
The great thing about Go, chess, and StarCraft is that all of these can be easily simulated, which lets AIs generate knowledge across millions of iterations. The world they are tested in is the same simulated world they are trained in, so this works. But anybody who has ever tried to get a simulation-trained robot to work in real life knows that, unfortunately, this doesn't translate easily. Simulations are inherently limited by the assumptions we put into them. There is a reason why AIs that live in a purely theoretical space (such as language models or video game AIs) have had amazing breakthroughs, while robots still struggle with grasping arbitrary objects. As an example, just compare the recent video of DeepMind's robot soccer players falling over and over again with their impressive advances in StarCraft.
Another way to get around the time it takes to generate novel data would be to massively parallelize it: an AI could make many copies of itself, and if every copy learns something and pools that knowledge, the result would be exponential scaling. A chatbot with access to the internet could learn exponentially just by making exponentially many copies of itself. However, this would need a lot of resources and would still be bound by the time it takes each copy to perform individual actions or measurements in the real world. While this can speed up AI development, it will still be slow compared to the feedback loops most people have in mind when thinking of AI progress. Google just closed down yet another of their robotics experiments (Everyday Robots) that used this as part of their learning strategy.
There are natural limits to the predictability of the world
But what if we actually don't need more data? What if all the knowledge we already have as humans, combined in one artificial brain and paired with a misguided value system, is enough to outwit our species?
Let me make the most hand-wavy argument so far: the world is a random, complex system. We can't predict the weather more than a few days in advance, let alone what Trump will tweet tomorrow. There is no reason to believe that an AI 1000x smarter than us would be able to do this, because in complex systems small changes in state can have a massive effect on the outcome. We don't know the full state, and a 1000x smarter AI also won't know the full state, due to the difficulty of acquiring knowledge from the real world discussed above. "No plan survives contact with the enemy", because it is impossible, no matter how much compute you have, to predict the enemy. An AI can probably make better guesses than we can, and come up with more alternative plans than we can, but it cannot beat the combinatorial explosion. It has to work with best-guess estimates, and these quickly lose value in the same way that our best-guess estimate of the weather loses value a few days into the future.
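Here is a minimal sketch of that sensitivity, using the logistic map as a standard toy model of chaos (my own illustration, with arbitrary starting values): two states that differ by one part in ten billion become completely uncorrelated within a few dozen steps, even though the dynamics are known exactly.

```python
# Minimal sketch of sensitive dependence on initial conditions, using the chaotic
# logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 (a standard toy model).
r = 4.0
x_a, x_b = 0.2, 0.2 + 1e-10   # two states differing by one part in ten billion

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")
# After a few dozen steps the two trajectories are completely uncorrelated,
# even though the model and its parameters are known exactly.
```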
So even if, due to some flaw in the above arguments, AI were actually able to scale its intelligence exponentially, I believe that the application of this intelligence would very quickly run into the limits imposed by the unpredictability of our world, again leading to logistic rather than exponential growth in that AI's power.
AI will be very useful and maybe even smarter than us, but it won't overpower us overnight
I have argued that AI will grow logistically, not exponentially, and that we will see the move to logistic growth quite soon, as we approach the current limits of human knowledge. Tricks like simulation won't get us much further, and even if they did, the power of such an AI would still only grow logistically because of the limits imposed by the unpredictability of our world.
I have looked at five different constraints on the growth of AI: compute, algorithms, data, scientific progress, and the predictability of our world. There are probably other constraints I didn't consider that could also limit the exponential growth of AI. Claiming that AI will grow exponentially is claiming that there will be NO constraints, which is a much stronger claim than saying that there will be A constraint, because a single binding constraint is enough to stop exponential growth.
If we accept this line of reasoning, then AI probably has an upper limit comparable to a very, very intelligent human being who somehow manages to keep all of human knowledge in their head. That's quite impressive, but it's not the same as an exponentially growing AI. It's something we should be very careful with, but not avoid at all costs. I think it's reasonable to assume that we'll approach this limit not exponentially but logistically, with the last steps taking much more time than the first ones, which is what we are witnessing now. We will need to change our laws, adapt our intuitions, regulate the use of AI, and maybe even treat AIs as citizens, but it's not something that can kill us within a day of reaching superhuman knowledge.
With this in mind, we can focus some of our attention on monitoring AI and working to integrate it into today's world, while also not losing sight of all of the other issues we are facing.