Why I disagree that this video is insightful/entertaining: the YouTuber quite clearly has very little knowledge of the subject she is discussing. It's actually quite reasonable for the Zoom CEO to say that fixing hallucinations will "occur down the stack", given that Zoom is not the one developing AI models; it would instead be building the infrastructure and environments that AI systems operate within.
From what I watched of the video, she also completely misses the real reason the CEO's claims are ridiculous: if you have an AI system capable enough to replicate a person's actions in the workplace, why would we go to the extra effort of having Zoom calls between these AI clones?
I.e., it would be much more efficient to build information systems that align with the strengths and comparative advantages of the AI systems. Presumably this would not involve "realistic clones of real human workers" talking to each other, but rather a network of AI systems communicating via protocols and data formats designed to be as robust and efficient as possible.
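To make the contrast concrete, here's a minimal sketch of what agent-to-agent communication might look like instead of a video call: compact, structured messages over a wire format. All of the field names, identifiers, and the `TaskMessage` schema here are invented for illustration; real systems would use whatever protocol the ecosystem settles on.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical message schema: instead of rendering talking avatars,
# two AI assistants exchange small, machine-readable requests.
@dataclass
class TaskMessage:
    sender: str      # illustrative agent identifier
    recipient: str
    intent: str      # what the sender wants done, e.g. "schedule_review"
    payload: dict    # structured arguments, not free-form speech

msg = TaskMessage(
    sender="agent:alice-assistant",
    recipient="agent:bob-assistant",
    intent="schedule_review",
    payload={"doc_id": "Q3-roadmap", "deadline": "2025-07-01"},
)

# Serializing to JSON yields a few hundred bytes, versus the bandwidth
# and latency of a simulated video conversation.
wire = json.dumps(asdict(msg))
print(json.loads(wire)["intent"])  # schedule_review
```

The point isn't the specific format; it's that machine-to-machine coordination has no reason to route through a human-shaped interface.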
FWIW, if I were the CEO of Zoom, I'd be pushing hard on the "human-in-the-loop" idea, e.g. building features that let you send out AI agents to fetch information and complete tasks in real time while you're in meetings with your colleagues. That would actually be a useful product, and one that keeps Zoom interesting and relevant.
With regard to AI progress stalling, it depends on what you mean by "stalling", but I think it's basically impossible if you mean "AI will literally not improve in any economically meaningful way".
When I first learned how modern AI systems work, I was astonished at how absurdly simple and inefficient they are. In the last ~2 years there has been a move towards things like MoE architectures and RNN hybrids, but this only scratches the surface of what is possible with more complex architectures. We should expect a steady stream of algorithmic improvements that push down inference costs and make more real-world applications viable. There's also Moore's Law, but everyone already talks about that quite a lot.
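The MoE idea mentioned above is a good example of how simple these efficiency tricks still are. Below is a minimal sketch (my own toy construction, not any particular model's implementation): a router scores a set of expert weight matrices per input, and only the top-k experts actually run, so compute per token stays roughly flat even as total parameter count grows.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2  # arbitrary toy sizes

# A learned router plus a pool of expert weight matrices.
W_router = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    scores = softmax(x @ W_router)        # one routing probability per expert
    chosen = np.argsort(scores)[-top_k:]  # keep only the top-k experts
    weights = scores[chosen] / scores[chosen].sum()
    # Weighted sum of the selected experts' outputs; the other
    # n_experts - top_k experts are skipped entirely, saving compute.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
out = moe_forward(token)
print(out.shape)  # (8,)
```

Real MoE layers add load-balancing losses and batched routing, but the core trick really is just "don't run most of the network most of the time", which is why it reads more like a first idea than a final answer.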
Also, if you buy the idea that "AI systems will learn tasks that they're explicitly trained for", then incremental progress is almost guaranteed. I find it hilarious that everyone in industry and government is very excited about general-purpose AI and its capacity for automation, yet there is basically no large-scale effort to create high-quality training data to expedite this process.
The fact that pre-training plus chatbot-style RLHF is adequate to build a system with any economic value at all is dumb luck. I'd predict that if we actually dedicated a not-insignificant chunk of society's efforts towards training DL systems to perform important tasks, we would make a lot of progress very quickly. Perhaps a central actor like the CCP will do this at some stage, but until then we should expect incremental progress as small-scale efforts gradually build up datasets and training environments.