My understanding is that self-driving cars are already less likely to get into accidents than human drivers are.
However, they certainly can’t “drive autonomously wherever and whenever a typical human driver could”; instead, current self-driving technology has to be adapted to each city one at a time, through a costly process.
What does this tell us about how far we are from AGI? In particular, should this make us less enthusiastic about the generative AI direction than we might otherwise be? If it’s so powerful, shouldn’t we be able to use it to solve self-driving?
Great question!
I guess it doesn’t feel to me that we should make a huge update on this: anyone at all familiar with generative AI already knows it is incredibly unreliable, without having to bring self-driving cars into the equation.
The question then becomes whether the unreliability problem is insurmountable. There are certainly challenges here, but it’s not clear that it is. The short-timelines scenarios are pretty much always contingent on us discovering some kind of self-reinforcing improvement loop. Is this likely? It’s hard to tell, but there are already very basic techniques pointing in that direction, like self-consistency (see the sketch below) or reinforcement learning from AI feedback (RLAIF), so it isn’t completely implausible. And it’s not really clear to me why the current lack of fully general self-driving cars is a strong reason to believe that attempts to set up such a loop will fail.
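For concreteness, here is a minimal sketch of the self-consistency idea: sample several independent answers to the same prompt and keep the majority one. The `generate` function is a hypothetical stand-in for any sampling-based model call, not a specific library’s API.

```python
from collections import Counter

def self_consistency(generate, prompt, n_samples=10):
    """Sample several answers and keep the most common one.

    `generate` is a hypothetical stand-in for any sampling-based
    LLM call that returns a final answer string; it is an
    assumption of this sketch, not a specific library's API.
    """
    answers = [generate(prompt) for _ in range(n_samples)]
    # Majority vote: independent samples that agree on an answer
    # are more likely to be right than any single sample is.
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_samples  # answer plus agreement rate
```

Even this crude aggregation trades extra compute for reliability, which is the flavor of technique that could feed such a loop.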