Hi!
Thanks for all the comments! Sorry for taking so long to reply.
1) Yes, scaling inputs isn’t the same thing as scaling capabilities. That’s exactly why the empirical success of the scaling hypothesis was (and is) so surprising.
2) Thanks for the point about self-driving cars — we’ve removed that claim from the article on our site. I’m going to investigate this some more before reinstating it. We’ll update the EA Forum version of the article to reflect the current on-site article. (Though we won’t keep doing that indefinitely.)
A friend independently made a related point to me: he noted that a self-driving ML engineer had claimed the city-based limitation of self-driving taxis exists because the AI relies on 3D-mesh maps for navigation. I’m somewhat confused about why this would be a big limitation, since I thought techniques like NeRFs made it feasible to quickly generate a high-res map of static geometry, and that most of the difficulty in self-driving lay in handling edge cases (e.g. a baby deer jumping into the road) rather than in being aware of the static shapes around you. But that’s my confusion, and I’d rather not stick my neck out on a claim where I’m not an expert and which experts do contest.
3) Mostly, I was basing these claims on Waymo: their stats, plus anecdotal reports from folks in SF about how well the cars perform. When I looked into Waymo’s statistics for average miles without human interruption, at first blush they looked quite impressive (17k miles!). But the one niggling worry I had was that they said human feedback wasn’t always recorded as an interruption.