I don’t know what Andrej Karpathy’s actual timeline for AGI is. In the Dwarkesh Patel interview that everyone has been citing, Karpathy says he thinks useful AI agents, not AGI, are a decade away. This implies he thinks AGI is at least a decade off, but he never directly addresses when he expects AGI to arrive.
After the interview, Karpathy posted a clarification on Twitter saying that 10 years to AGI should come across as highly optimistic in the grand scheme of things. That maybe implies he does think AGI is 10 years away and will arrive at the same time as useful AI agents, but the statement is ambiguous enough that I would hesitate to interpret it either way.
I could be wrong, but I didn’t get the impression that continual learning or online learning was Karpathy’s main reason (let alone his sole reason) for thinking useful AI agents are a decade away, or for his other comments expressing skepticism or pessimism about progress in AI capabilities, at least relative to people with 5-year AGI timelines.
Continual learning/online learning is not one of the main issues raised in my post, and while I think it is an important issue, you can hand-wave away continual learning and still face problems with scaling limits, learning from video data, the supply of human examples to imitation-learn from, data inefficiency, and generalization.
It’s not just Andrej Karpathy: a number of other prominent AI researchers, such as François Chollet, Yann LeCun, and Richard Sutton, have publicly objected to the idea that very near-term AGI via scaling LLMs is very likely. In fact, in the preamble of my post I linked to a previous post of mine discussing a survey of AI researchers that found a median AGI timeline of over 20 years (and possibly much, much longer than 20 years, depending on how you interpret the survey), and another survey in which 76% of AI experts said that scaling LLMs or other current techniques is unlikely or very unlikely to reach AGI. I’m not defending a fringe, minority position in the AI world; I’m defending something much closer to the majority view than what you typically see on the EA Forum.