That did come across to me when I watched the interview. For example, in my summary:
Sutskever specifically predicts that another 100x scaling of AI models would make a difference, but would not transform AI capabilities.
He was cagey about his specific ideas on the "something important" that "will continue to be missing". He said his company is working on it, but he can't disclose details.
I find this secrecy to be a bit lame. I like when companies like DeepMind publish replicable research or, better yet, open source code and datasets. Even if you don't want to go that far, it's possible to talk about ideas in general terms without giving away the trade secrets that would make them easy to copy.
Most of the startups that have focused primarily on ambitious fundamental AI research (Vicarious and Numenta are the two examples I'm thinking of) have not ended up successfully productizing any of their research, at least so far. DeepMind's done amazing work, but the first AI model it developed with major practical usefulness was AlphaFold, six years after its acquisition by Google and ten years after its founding, and it didn't release a major product until DeepMind merged with Google Brain in 2023 and worked on Gemini. It's more likely that a research-focused startup like Sutskever's company, Safe Superintelligence, will have no lucrative, productizable ideas (at least not for a long time) than that it will have ideas so great that even disclosing their general contours would let other companies steal its competitive advantage.
My guess is that Safe Superintelligence doesn't yet have any fantastic ideas that OpenAI, DeepMind, and others don't also have, and the secrecy covers for that fact just as conveniently as it protects the company's trade secrets or IP.