I don't think Karpathy would describe his view as involving any sort of discontinuity in AI development. If anything, his views are the most central no-discontinuity, straight-lines-on-graphs view (no intelligence explosion accelerating the trends, no winter decelerating the trends). And if you think the mean date for AGI is 2035, then it would take extreme confidence (on the order of a variance of less than a year) to claim AGI is less than 0.1% likely by 2032!
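As a quick sanity check on that claim, here is a minimal sketch assuming the AGI arrival date is normally distributed (an illustrative assumption of mine, not anything Karpathy has stated), using the standard normal quantile function from scipy:

```python
# Sanity check: if the mean AGI date is 2035, how narrow must the
# distribution be for P(AGI by 2032) to fall below 0.1%?
# Assumes a normal distribution over arrival dates -- an illustrative
# assumption, not a claim about anyone's actual forecast.
from scipy.stats import norm

mean = 2035.0
cutoff = 2032.0

# z-score such that only 0.1% of the probability mass lies below it
z = norm.ppf(0.001)            # about -3.09

# Largest standard deviation consistent with P(X <= 2032) < 0.001
max_sd = (cutoff - mean) / z   # about 0.97 years

print(f"max standard deviation: {max_sd:.2f} years")
print(f"max variance: {max_sd**2:.2f} years^2")
```

Under this assumption, the standard deviation would have to be under about 0.97 years (variance under about 0.94 years squared), which matches the "variance of less than a year" figure above.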
I was only mentioning Karpathy as someone reasonable who repeatedly points out the lack of online learning and seems to have (somewhat) longer timelines because of that. This is solely based on my general impression. I agree the stated probabilities seem wildly overconfident.
I don't know what Andrej Karpathy's actual timeline for AGI is. In the Dwarkesh Patel interview that everyone has been citing, Karpathy says he thinks it's a decade until we get useful AI agents, not AGI. This implies he thinks AGI is at least a decade away, but he doesn't actually directly address when he thinks AGI will arrive.
After the interview, Karpathy made a clarification on Twitter where he said 10 years to AGI should come across to people as highly optimistic in the grand scheme of things, which maybe implies he does actually think AGI is 10 years away and will arrive at the same time as useful AI agents. However, it's ambiguous enough that I would hesitate to interpret it one way or another.
I could be wrong, but I didn't get the impression that continual learning or online learning was Karpathy's main reason (let alone sole reason) for thinking useful AI agents are a decade away, or for his other comments that express skepticism or pessimism (relative to people with 5-year AGI timelines) about progress in AI or AI capabilities.
Continual learning/online learning is not one of the main issues raised in my post, and while I think it is an important issue, you can hand-wave away continual learning and still have problems with scaling limits, learning from video data, the supply of human examples for imitation learning, data inefficiency, and generalization.
It's not just Andrej Karpathy but a number of other prominent AI researchers, such as François Chollet, Yann LeCun, and Richard Sutton, who have publicly raised objections to the idea that very near-term AGI via scaling LLMs is very likely. In fact, in the preamble of my post I linked to a previous post of mine where I discuss how one survey of AI researchers found a median timeline for AGI of over 20 years (and possibly much, much longer than 20 years, depending on how you interpret the survey), and how, in another survey, 76% of AI experts surveyed think scaling LLMs or other current techniques is unlikely or very unlikely to reach AGI. I'm not defending a fringe, minority position in the AI world, but something much closer to the majority view than what you typically see on the EA Forum.