For autonomous driving, current approaches which “can’t deal with novelty” are already far safer than human drivers.
Safety is only one component of overall driving competence. A parked car is 100% safe. Even if it is true that autonomous cars are safer than human drivers, they aren’t as competent as human drivers overall.
Incidentally, I’m pretty familiar with the autonomous driving industry and I’ve spent countless hours looking into such claims. I even once paid someone with a PhD in a relevant field to help me analyze some data to try to come to a conclusion. (The result was that there wasn’t enough data to draw a conclusion.) What I’ve found is that autonomous driving companies are incredibly secretive about the data they keep on safety and other kinds of driving performance. They have aggressive PR and marketing, but they won’t actually publish the data that would allow third parties to independently audit how safe their AI vehicles are.
Beyond the lack of public data, there are two additional complications: 1) aggressive geofencing, which artificially constrains the problem and makes it easier (just as a parked car is 100% safe, a car slowly circling a closed track would be almost 100% safe), and 2) humans in the loop, either physically inside the car or remotely.[1]
The most important thing to know is that you can’t trust these companies’ PR and marketing. Autonomous vehicle companies will be happy to say their cars are superhuman right up until the day they announce they’re shutting down. It’s like Soviet propagandists saying communism is going great in 1988. But also, no, you can’t look at their economic data.
Edited on October 20, 2025 at 12:35pm Eastern to add: See the footnote added to my comment above for Andrej Karpathy’s recent comments on this.
You’re right that they made the problem easier with geofencing, but the data from Waymo isn’t ambiguous and, despite your previous investigations, it is now published: https://storage.googleapis.com/waymo-uploads/files/documents/safety/Safety%20Impact%20Crash%20Type%20Manuscript.pdf
This example makes it clear that the approach works to automate significant human labor, with some investment, without solving AGI.
I’ll have to look at that safety report later and see what the responses to it are. At a glance, this seems to be a bigger and more rigorous disclosure than what I’ve seen previously, and Waymo has taken the extra step of publishing in a journal.
[Edit, added on October 20, 2025 at 12:40pm Eastern: Any safety data will have limitations, and we shouldn’t expect perfection, nor should that get in the way of lauding companies for being more open with their data. However, one thing to think about: if autonomous vehicles are geofenced to safer areas but are being compared against humans driving in all areas, from the safest to the most dangerous, then this isn’t a strict apples-to-apples comparison.]
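To make the geofencing concern concrete, here is a toy calculation with entirely invented numbers (none of these rates come from Waymo or any real crash dataset) showing how a fleet that merely matches human safety on low-risk roads can still look twice as safe as the pooled human average:

```python
# Toy illustration of the apples-to-apples problem. All numbers invented.
# Rates are hypothetical crashes per million miles.

human = {
    "low_risk_areas":  {"rate": 2.0, "share_of_miles": 0.5},
    "high_risk_areas": {"rate": 6.0, "share_of_miles": 0.5},
}

# The human benchmark is usually reported pooled across ALL areas.
human_pooled = sum(v["rate"] * v["share_of_miles"] for v in human.values())
print(f"Human pooled rate: {human_pooled:.1f} per M miles")  # 4.0

# Suppose the AV fleet exactly matches humans in low-risk areas (2.0)
# but is geofenced so that 100% of its miles are in those areas.
av_rate = 2.0
print(f"AV rate (geofenced): {av_rate:.1f} per M miles")     # 2.0

# Naive comparison: the fleet appears 2x safer than the human average,
# even though it is no safer than humans driving the same streets.
print(f"Apparent improvement: {human_pooled / av_rate:.1f}x")
```

The point is not that this is what Waymo’s data shows; it’s that a pooled human baseline can’t answer the question unless the human miles are drawn from the same kinds of roads as the autonomous miles.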
However, I’m not ready to jump to any conclusions just yet. It was a similar Waymo report (though not one published in a journal) that I paid someone with a PhD in a relevant field to help me analyze, and despite that report initially looking promising and interesting to me, their conclusion was that there was not enough data to determine one way or the other whether Waymo’s autonomous vehicles were actually safer than the average human driver.
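For intuition on how a report can look promising and still be statistically inconclusive, here is a rough sketch of the uncertainty involved. The mileage, crash count, and human benchmark below are hypothetical stand-ins, not figures from any Waymo report:

```python
# Why few observed crashes means wide error bars. All numbers hypothetical.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson count k."""
    lower = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

miles_m = 10.0          # hypothetical fleet mileage, in millions of miles
crashes = 12            # hypothetical count of serious crashes
human_rate = 2.0        # hypothetical human rate per million miles

lo, hi = poisson_ci(crashes)
print(f"AV rate: {crashes / miles_m:.2f} per M miles "
      f"(95% CI {lo / miles_m:.2f} to {hi / miles_m:.2f})")
print(f"Human benchmark: {human_rate:.2f} per M miles")
# With these numbers the interval is roughly 0.62 to 2.10 per million
# miles, which straddles the human benchmark: the point estimate looks
# better than humans, but the data can't rule out "about the same".
```

That, in miniature, is the shape of the conclusion I paid for: a point estimate that looks good, sitting inside an interval too wide to decide the question.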
I was coming at that report from the perspective of wanting it to show that Waymo’s vehicles were safer than human drivers (although I didn’t tell the person with the PhD that because I didn’t want to bias them). I was disappointed that the result was inconclusive.
If it turns out Waymo’s autonomous vehicles are indeed safer than the average human driver, I would celebrate that. Sadly, however, it would make me only marginally more optimistic about the near-term prospects of autonomous vehicle technology for widespread commercialization.
The bigger problem for the overall argument here (that autonomous vehicles show that data efficiency and the ability to deal with novelty aren’t important) is that safety is only one component of competence (as I said, a parked car is 100% safe), and autonomous vehicles are not as competent as human drivers overall. If they were, there would be a huge commercial opportunity in automating human driving at scale; by some estimates, possibly the largest commercial opportunity in the history of capitalism. The reason this hasn’t been done is not regulatory or social or anything like that. It’s that the technology simply can’t do the job.
The technology as it’s deployed today is not only helped along by geofencing; it’s also supported by a high ratio of human labour to autonomous driving. That’s not only safety drivers in the car and remote monitors and operators, but also engineers doing a lot of special-casing for specific driving environments.
If you want to use autonomous vehicles as an example of AI automating significant human labour, first they would have to automate significant human labour — practically, not just in theory — but that hasn’t happened yet.
Moreover, driving should, at least in theory, be a low bar. Driving is considered to be routine, boring, repetitive, not particularly complex — exactly the sort of thing we would think should be easier to automate. So, if approaches to AI that have low data efficiency and don’t deal well with novelty can’t even handle driving, then it stands to reason that more complex forms of human labour such as science, philosophy, journalism, politics, economics, management, social work, and so on would be even less susceptible to automation by these approaches.
Just to be clear on this point: if we had a form of AI that could drive cars, load dishwashers, and work an assembly line but not do those other things (like science, etc.), I think that would be wonderful and it would certainly be economically transformative, but it wouldn’t be AGI.
Edited to add on October 20, 2025 at 12:30pm Eastern:
Don’t take my word for it. Andrej Karpathy, an AI researcher formerly at OpenAI who led Tesla’s autonomous driving AI from 2017 to 2022, recently said on a podcast that he doesn’t think fully autonomous driving is anywhere near solved:

…self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo and so on has very few cars. … Also, when you look at these cars and there’s no one driving, I actually think it’s a little bit deceiving because there are very elaborate teleoperation centers of people kind of in a loop with these cars. I don’t have the full extent of it, but there’s more human-in-the-loop than you might expect. There are people somewhere out there beaming in from the sky. I don’t know if they’re fully in the loop with the driving. Some of the time they are, but they’re certainly involved and there are people. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.