I don’t have to tell you that scaling inputs like money, compute, labour, and so on isn’t the same as scaling outputs like capabilities or intelligence. So, evidence that inputs have been increasing a lot is not, on its own, evidence that outputs have been increasing a lot. We should avoid conflating the two.
I’m actually not convinced AI can drive a car today in any sense that was not also true 5 or 10 years ago. I have followed the self-driving car industry closely; internally, companies have a lot of metrics on safety and performance, but these are closely held, and rarely is anything disclosed to the public.
We also have no idea how much human labour is required in operating autonomous vehicle prototypes, e.g., how often a human has to intervene remotely.
Self-driving car companies are extremely secretive about exactly the information that is most interesting for judging technological progress, while simultaneously running strong, aggressive PR and marketing. So I’m skeptical, especially since there is a history of companies like Cruise making aggressive, optimistic pronouncements and then abruptly shutting down.
Elon Musk has said full autonomy is one year away every year since 2015. That’s an extreme case, but others in the self-driving car industry have also set timelines and then blown past them.
Hi! Thanks for all the comments! Sorry for taking so long.
1) Yes, scaling inputs isn’t the same thing as scaling capabilities. That’s why the empirical success of the scaling hypothesis is/was so surprising.
2) Thanks for the point about self-driving cars; we removed that claim from the article on site. I’m going to investigate this some more before reinstating the self-driving claim on site. We’ll update the article on the EA Forum to reflect the current on-site article. (Though we won’t keep doing that.)
A friend independently made a related point to me: he noted that a self-driving ML engineer claimed the reason self-driving taxis are limited to specific cities is that the AI relies on 3D-mesh maps for navigation. I’m kind of confused about why this would be a big limitation, as I thought things like NeRFs made it feasible to quickly generate a high-res voxel map of static geometry, and that most of the difficulty in self-driving was in dealing with edge cases, e.g. a baby deer jumping into the road, rather than in being aware of the static shapes around you. But that’s just my confusion, and I’d rather not stick my neck out on a claim where I’m not an expert and which experts do contest.
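To spell out what I had in mind with NeRFs: once you have a trained density field, extracting a coarse voxel/occupancy map of the static scene is just sampling that field on a regular grid and thresholding. A minimal sketch, assuming a hypothetical trained `density_fn` (the function name, bounds, and threshold are illustrative, not any particular system’s API):

```python
import numpy as np

def nerf_to_voxel_occupancy(density_fn, bounds, resolution=256, threshold=10.0):
    """Sample a trained NeRF's density field on a regular grid and
    threshold it into a binary occupancy (voxel) map of static geometry.

    density_fn: hypothetical callable mapping an (N, 3) array of xyz
                points to (N,) volume densities, e.g. a trained MLP.
    bounds:     ((xmin, ymin, zmin), (xmax, ymax, zmax)) scene box.
    """
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    axes = [np.linspace(lo[i], hi[i], resolution) for i in range(3)]
    # Build an (R, R, R, 3) grid of query points covering the scene box.
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    densities = density_fn(grid.reshape(-1, 3)).reshape(grid.shape[:3])
    return densities > threshold  # True where the scene is solid
```

Even granting that this works, it only gives you static geometry, which is why I’d still expect the dynamic edge cases to be where the real difficulty lies.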
3) Mostly, I was basing these claims off Waymo: their stats, plus anecdotal reports from folks in SF about how well the cars perform. When I looked into Waymo’s statistics for average miles without human intervention, at first blush they looked quite impressive (17k miles!). But the one niggling worry I had was that they said human feedback wasn’t always written down as an intervention.
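One way to see why that worry matters: if only some fraction of interventions actually gets logged, the headline miles-per-intervention figure is inflated by the inverse of that fraction. A toy calculation (all numbers hypothetical, not Waymo’s):

```python
# Hypothetical illustration: under-logging inflates miles per intervention.
miles_driven = 1_000_000
true_interventions = 200          # all remote assists, logged or not
logging_fraction = 0.3            # share actually written down (assumed)

logged = true_interventions * logging_fraction
apparent_mpi = miles_driven / logged           # what the public stat shows
true_mpi = miles_driven / true_interventions   # what actually happened

print(f"apparent: {apparent_mpi:,.0f} mi/intervention")  # ~16,667
print(f"true:     {true_mpi:,.0f} mi/intervention")      # 5,000
```

So a reported ~17k miles per intervention is only as informative as the logging policy behind it.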