I don't think anyone should be able to confidently say that we are more than a single 10x scale-up or breakthrough away from machines being smarter than us.
Very prominent deep learning experts who are otherwise among the most bullish public figures in the world on AI, such as Ilya Sutskever (AlexNet co-author, OpenAI co-founder and Chief Scientist, now runs Safe Superintelligence) and Demis Hassabis (DeepMind co-founder, Google DeepMind CEO, Nobel Prize winner for AI work), both say that multiple research breakthroughs are needed. Sutskever specifically said that another 100x scaling of AI wouldn't be that meaningful. Hassabis specifically names three breakthroughs that are needed: continual learning, world models, and System 2 thinking (reasoning, planning); that last one seems like it might be more than a single research breakthrough, but this is how Hassabis frames the matter. Sutskever and Hassabis are the kind of AI capabilities optimists that people cite to bolster arguments for short timelines, and even they're saying this.
There are other world-class experts who say similar things, but they are better known as skeptics of LLMs. Yann LeCun (Meta AI's departing Chief Scientist, won the Turing Award for his pioneering work in deep learning) and Richard Sutton (won the Turing Award for his pioneering work in reinforcement learning) have both argued that AGI or human-level AI will take a lot of fundamental research work. LeCun and Sutton have also both taken the exceptional step of sketching out a research roadmap to AGI/human-level AI, i.e., LeCun's APTAMI and Sutton and co-authors' Alberta Plan. They are serious about this, and they are both actively working on this research.
I'm not cherry-picking; this seems to be the majority view. According to a survey from early this year, 76% of AI experts don't think LLMs or other current AI techniques will scale to AGI.
I don't see this strain of argument as particularly action-relevant. I feel like you are getting way too caught up in abstractions about what "AGI" is and such. This is obviously a big deal, it is obviously going to happen "soon" and/or is already "happening", and it's obviously time to take this very seriously and act like responsible adults.
OK, so you think "AGI" is likely 5+ years away. Are you not worried about Anthropic having a fiduciary responsibility to its shareholders to maximize profits? Reading between the lines, I guess you see very little value in slowing down or regulating AI? While leaving room for the chance that our whole disagreement really does come down to our object-level timeline differences, I think you are probably missing the forest for the trees here in your quest to prove the incorrectness of people with shorter timelines.
I am not a doom maximalist, in the sense that I think this technology is already profoundly world-bending and scary today. I am worried about my cousin becoming a short-form-addicted goonbot with an AI best friend right now, whether or not robot bees are about to gouge my eyes out.
I think there is a reasonably long list of sensible regulations around this stuff (both x-risk-related and more minor stuff) that would probably result in a large drawdown in these companies' valuations, and really in the stock market at large. For example (but not limited to): AI companionship, romance, and porn should probably be put on pause right now while the government performs large-scale A/B testing. That is the same thing we should have done with social media and cellphone use, especially in children, and our government horribly failed to do it because of its inability to utilize RCTs and the absolutely horrifying average age of our president and both houses of Congress.
I was specifically responding to your assertion that no one should be able to confidently say X. There are world-class experts like Ilya Sutskever and Demis Hassabis who do confidently say X, and they're even on the bullish, optimistic end of the spectrum in terms of AI capabilities forecasts/AGI forecasts, such that they're some of the public figures in AI that people cite when they want to make an argument for near-term AGI. I was only directly responding to that narrow point.
It doesn't really have anything to do with different specific definitions of AGI. I'm not sure if Sutskever and Hassabis even define AGI the same way, for example. It's just what both of them have said about what it will take to get to AGI, which is exactly the thing you said no one should be able to confidently say.
On your more general argument that AGI, or something close enough to AGI, is obviously going to be developed soon or has already been developed: no, I don't agree with that general argument. To quickly boil down the main cruxes of my counterargument: current AI isn't that useful for much of anything, and there are a lot of thorny research problems that people have already been banging their heads against for years and that we need to make progress on to make AI more useful.
But I was just trying to respond to your narrow point about no one being able to confidently say X. I wasn't trying to open up a general debate about near-term AGI (let alone about regulating the generative AI systems that currently exist). However, if you're eager, I would be happy to have that debate in the comments of another post (e.g. any of the ones I've written on the topic, such as the two I just linked to).