Self-driving cars are not close to being solved. Don't take my word for it. Listen to Andrej Karpathy, the lead AI researcher responsible for the development of Tesla's Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term "vibe coding".)
From Karpathy's October 17, 2025 interview with Dwarkesh Patel:
Dwarkesh Patel 01:42:55
You've talked about how you were at Tesla leading self-driving from 2017 to 2022. And you firsthand saw this progress from cool demos to now thousands of cars out there actually autonomously doing drives. Why did that take a decade? What was happening through that time?
Andrej Karpathy 01:43:11
One thing I will almost instantly push back on is that this is not even near done, in a bunch of ways that I'm going to get to. Self-driving is very interesting because it's definitely where I get a lot of my intuitions because I spent five years on it. It has this entire history where the first demos of self-driving go all the way to the 1980s. You can see a demo from CMU in 1986. There's a truck that's driving itself on roads.
Fast forward. When I was joining Tesla, I had a very early demo of Waymo. It basically gave me a perfect drive in 2014 or something like that, so a perfect Waymo drive a decade ago. It took us around Palo Alto and so on because I had a friend who worked there. I thought it was very close and then it still took a long time.
For some kinds of tasks and jobs and so on, there's a very large demo-to-product gap where the demo is very easy, but the product is very hard. It's especially the case in cases like self-driving where the cost of failure is too high. Many industries, tasks, and jobs maybe don't have that property, but when you do have that property, that definitely increases the timelines.
For example, in software engineering, I do think that property does exist. For a lot of vibe coding, it doesn't. But if you're writing actual production-grade code, that property should exist, because any kind of mistake leads to a security vulnerability or something like that. Millions and hundreds of millions of people's personal Social Security numbers get leaked or something like that. So in software, people should be careful, kind of like in self-driving. In self-driving, if things go wrong, you might get injured. There are worse outcomes. But in software, it's almost unbounded how terrible something could be.
I do think that they share that property. What takes the long amount of time and the way to think about it is that it's a march of nines. Every single nine is a constant amount of work. Every single nine is the same amount of work. When you get a demo and something works 90% of the time, that's just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine. While I was at Tesla for five years or so, we went through maybe three nines or two nines. I don't know what it is, but multiple nines of iteration. There are still more nines to go.
That's why these things take so long. It's definitely formative for me, seeing something that was a demo. I'm very unimpressed by demos. Whenever I see demos of anything, I'm extremely unimpressed by that. If it's a demo that someone cooked up just to show you, it's worse. If you can interact with it, it's a bit better. But even then, you're not done. You need the actual product. It's going to face all these challenges when it comes in contact with reality and all these different pockets of behavior that need patching.
We're going to see all this stuff play out. It's a march of nines. Each nine is constant. Demos are encouraging. It's still a huge amount of work to do. It is a critical safety domain, unless you're doing vibe coding, which is all nice and fun and so on. That's why this also reinforced my timelines from that perspective.
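The "march of nines" framing lends itself to a toy calculation (my sketch, not Karpathy's): if each additional nine of reliability costs a roughly constant amount of engineering effort, then effort grows linearly while the residual failure rate only shrinks geometrically.

```python
# Toy illustration of the "march of nines": each extra nine of
# reliability is assumed to cost one constant unit of effort,
# yet it only cuts the residual failure rate by a factor of 10.
for nines in range(1, 6):
    reliability = 1 - 10 ** -nines            # 0.9, 0.99, 0.999, ...
    failures_per_million = 10 ** (6 - nines)  # expected failures in 1e6 drives
    print(f"{nines} nine(s) of reliability ({reliability:.5f}): "
          f"~{failures_per_million:,} failures per million drives")
```

Under this (admittedly crude) model, going from a 90% demo to five nines takes five times the effort of the demo itself while still leaving ten failures per million drives, which is why a convincing demo says so little about time-to-product.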
Karpathy elaborated later in the interview:
The other aspect that I wanted to return to is that self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo and so on has very few cars. They're doing that roughly speaking because they're not economical. They've built something that lives in the future. They've had to pull back the future, but they had to make it uneconomical. There are all these costs, not just marginal costs for those cars and their operation and maintenance, but also the capex of the entire thing. Making it economical is still going to be a slog for them.
Also, when you look at these cars and there's no one driving, I actually think it's a little bit deceiving because there are very elaborate teleoperation centers of people kind of in a loop with these cars. I don't have the full extent of it, but there's more human-in-the-loop than you might expect. There are people somewhere out there beaming in from the sky. I don't know if they're fully in the loop with the driving. Some of the time they are, but they're certainly involved and there are people. In some sense, we haven't actually removed the person, we've moved them to somewhere where you can't see them.
I still think there will be some work, as you mentioned, going from environment to environment. There are still challenges to make self-driving real. But I do agree that it's definitely crossed a threshold where it kind of feels real, unless it's really teleoperated. For example, Waymo can't go to all the different parts of the city. My suspicion is that it's parts of the city where you don't get good signal. Anyway, I don't know anything about the stack. I'm just making stuff up.
Dwarkesh Patel 01:50:23
You led self-driving for five years at Tesla.
Andrej Karpathy 01:50:27
Sorry, I don't know anything about the specifics of Waymo. By the way, I love Waymo and I take it all the time. I just think that people are sometimes a little bit too naive about some of the progress and there's still a huge amount of work. Tesla took in my mind a much more scalable approach and the team is doing extremely well. I'm kind of on the record for predicting how this thing will go. Waymo had an early start because you can package up so many sensors. But I do think Tesla is taking the more scalable strategy and it's going to look a lot more like that. So this will still have to play out and hasn't. But I don't want to talk about self-driving as something that took a decade because it didn't take it yet, if that makes sense.
Dwarkesh Patel01:51:08
Because one, the start is at 1980 and not 10 years ago, and then two, the end is not here yet.
Andrej Karpathy 01:51:14
The end is not near yet because when we're talking about self-driving, usually in my mind it's self-driving at scale. People don't have to get a driver's license, etc.
I hope the implication for discussions around AGI timelines is clear.