Hi again,

Coming back to this post as I have changed my mind significantly on this topic since my first comment, and I wanted to share my reasons.
The point is not whether AGI is possible in principle or whether it will eventually be created if science and technology continue making progress — it seems hard to argue otherwise — but that this is not the moment. It’s not even close to the moment.
I used to agree with this statement on the basis that:
I saw AI as basically a stochastic parrot, and presentations like François Chollet’s (October 2024) convinced me that the lack of abstract-reasoning ability was why he believed we were a long way from AGI. Under this reasoning, we wouldn’t get there unless we figured out how to teach AI to reason the way humans do: compressing knowledge into higher-level concepts that form a ‘world model’ and allow us to solve problems we have never been exposed to before.
In addition, I viewed physical, economic, and supply-chain limitations as critical bottlenecks that would significantly slow progress in AI capabilities.
I also sympathised with the argument that “maybe this is artificial hype” designed by tech bros to attract massive investment, as this wouldn’t have been the first time something like that happened!
Since then, however, I have changed my mind significantly. I no longer believe TAI or even AGI to be in the distant future, for several reasons:
Recent observable developments in AI

A combination of factors has made me think that we can actually scale LLMs to AGI using already-available methods: the bitter lesson (which states that, historically, brute-force approaches to computer problem solving have consistently beaten more “elegant” ones), an understanding of scaling laws as a way to squeeze more juice out of the same economic investment in compute, and recent observable developments showing that we can no longer plausibly deny some level of world model (see this example for evidence that LLMs can solve problems they have never read about before, in a way very similar to a human).

Even if we still haven’t figured out how to make AI think like a human, this capability has emerged on its own simply from more effective compute (the bitter lesson again).

This is not the first time that capabilities a model was not explicitly trained for have appeared unexpectedly as a result of more compute (we call them ‘emergent capabilities’), which is a further argument against the idea that a new training paradigm for abstract reasoning is a prerequisite for AGI. Chollet’s argument, that significant progress in abstract-reasoning capabilities was highly unlikely in the near future given current trends, turned out to be wrong. See this excerpt from AI Safety Atlas:
In December 2024, OpenAI’s o3 achieved 87.5% on ARC-AGI, a benchmark specifically designed to test abstract reasoning and resist gaming through memorization (Chollet et al., 2024). For four years, progress had crawled from GPT-3’s 0% in 2020 to GPT-4o’s 5% in 2024, leading many to expect meaningful progress would take years. The rapid jump from 5% to 87.5% caught many by surprise.
The final argument, which seals the deal for me on why timelines are probably much shorter than I used to think, is this: if we did figure out HOW to teach AI abstraction, as Chollet wanted, rather than letting it emerge from current training methods, that would only make the timelines shorter, not longer. This is a further reason to treat AGI as a valid and near-term risk.
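To make the “scaling laws” point above concrete, here is a minimal sketch of the idea that loss falls smoothly as a power law of training compute. The functional form is the standard power-law-plus-floor shape from the scaling-law literature; the constants are invented for illustration and are not fitted to any real model:

```python
# Illustrative scaling-law sketch: loss falls as a smooth power law of
# training compute toward an irreducible floor, which is why more effective
# compute alone keeps buying capability without a new training paradigm.
# All constants below are made up for illustration.

def loss(compute, a=406.4, alpha=0.34, irreducible=1.69):
    """Hypothetical pretraining loss as a function of compute (arbitrary units)."""
    return irreducible + a * compute ** -alpha

# Each 1000x increase in compute shaves off a predictable chunk of loss,
# with diminishing returns as the curve approaches the floor.
for c in (1e6, 1e9, 1e12):
    print(f"compute={c:.0e}  loss={loss(c):.2f}")
```

The key qualitative features, smooth predictable improvement and diminishing returns, hold for any positive `a`, `alpha`, and `irreducible`; the specific numbers carry no meaning here.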
Tom Davidson’s Monte Carlo simulation of the advent of AGI: in it, it’s rare to find a scenario where AGI doesn’t happen by 2060. See the Takeoffspeeds website for the full model, its assumptions, etc.
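To convey the flavour of such a simulation, here is a toy Monte Carlo sketch. This is not Davidson’s model: the distributions and every parameter below are invented purely for illustration of the method (sample uncertain inputs many times, read off the distribution of arrival years):

```python
import random

def sample_agi_years(n_samples=100_000, seed=0):
    """Toy Monte Carlo over AGI arrival year (illustrative only).

    Assumes the remaining 'effective compute' gap to AGI is uniform over
    3-12 orders of magnitude (OOMs), and that effective compute grows by
    roughly N(0.5, 0.2) OOMs per year (floored to stay positive).
    """
    rng = random.Random(seed)
    years = []
    for _ in range(n_samples):
        gap_oom = rng.uniform(3, 12)               # OOMs still needed
        growth = max(0.05, rng.gauss(0.5, 0.2))    # OOMs gained per year
        years.append(2025 + gap_oom / growth)
    return years

years = sample_agi_years()
frac_by_2060 = sum(y <= 2060 for y in years) / len(years)
print(f"P(AGI by 2060) under these toy assumptions: {frac_by_2060:.0%}")
```

Even in a toy model like this, most sampled scenarios land before 2060 unless the assumed growth rate is made very pessimistic, which mirrors the qualitative finding the real model reports.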
Looking at where effort in AI is mostly concentrated, most people working in AI seem to be making the safety problem worse, not better. Very few are actually working on safety; most are rushing to build AI ‘solutions’ across a host of industries and businesses, embedding it ever deeper into society even though we haven’t begun to solve the problems it could cause, which will only make safety measures harder to implement. This is a “move fast and break things” approach that generates a lot of money for first movers, but with potentially very harmful consequences for society as a whole.
Therefore, I think that a huge investment in AI safety specifically is needed and completely justifiable.
I very much wish someone would prove me wrong on this and convince me that AGI isn’t an imminent risk!
Much could be said in response to this comment. Probably the most direct and succinct response is my post “Unsolved research problems on the road to AGI”.
Largely for the reasons explained in that post, I think AGI is much less than 0.01% likely in the next decade.
I also think that AGI is altogether still quite unlikely in the next decade, but I don’t need AGI happening in the next decade to be worried about AI’s current ability to destabilise our world in a meaningful and potentially catastrophic way.
My main concern is that the pre-AI world was, IMO, not even adequately prepared for “traditional risks”: cyber attacks, geopolitical instability, military escalation, the erosion of democracy, and so on. I see AI as a complicating factor and a multiplier of those risks, and my cautious nature makes me think we should push even harder on disaster preparedness in general.
Even without AGI in the picture, I think we are under-prepared for the risks of misuse of current AI capabilities, which make it cheaper and easier to run cyber attacks and disinformation campaigns at scale (among other things, such as building biological weapons). I’m also very concerned about models being used by militaries to launch missiles and eliminate targets without human oversight. These things are already happening, and I think we are still not devoting enough attention to them.
In summary, because I feel we are not prepared enough TODAY, I see efforts to 1) limit the growth of AI capabilities and 2) have better safeguards against misuse of current capabilities as still important and valuable.
It’s very possible that the growth of AI capabilities will be halted or massively slowed anyway by factors you have already discussed (such as the AI bubble popping, or bottlenecks in hardware materials), and I would cautiously welcome those as net positives, for the reasons I mentioned. But I would also welcome voluntary efforts to curtail the growth of future AI capabilities, along with increased safety work, international cooperation, and regulation of current capabilities, as ways to buy us time to become better prepared.
PS: responding to my first comment about how we don’t yet have a proper definition for AGI:
We do in fact have workable definitions of AGI (such as AI performing 90% of human tasks at a level equal or superior to a human’s), even though in practice many people don’t bother defining their terms before using them.
Below are some useful descriptive terms I learned about. In practice, they are worth defining precisely when you use them, so others know exactly what you mean.
TAI (transformative AI): The point at which AI has replaced humans so much that it has fundamentally transformed society. (E.g. someone might think this is reached when AI can do 60% of economically significant tasks.) This is an impact-focused term rather than a capability-focused one.
AGI: Artificial General Intelligence. The point at which AI can do most human activities at the same level as a human. This is capability-focused, and again, what it really means is up to the writer to define.
ASI: Artificial Super Intelligence. The point at which AI can do every human activity better than humans. At that point, arguably, we no longer have any control over it.