That did come across to me when I watched the interview. For example, in my summary:
Sutskever specifically predicts that another 100x scaling of AI models would make a difference, but would not transform AI capabilities.
He was cagey about his specific ideas on the "something important" that "will continue to be missing". He said his company is working on it, but he can't disclose details.
I find this secrecy to be a bit lame. I like when companies like DeepMind publish replicable research or, better yet, open source code and datasets. Even if you don't want to go that far, it's possible to talk about ideas in general terms without giving away the trade secrets that would make them easy to copy.
Most of the startups that have focused primarily on ambitious fundamental AI research (Vicarious and Numenta are the two examples I'm thinking of) have not ended up successfully productizing any of their research, at least so far. DeepMind's done amazing work, but the first AI model it developed with major practical usefulness was AlphaFold, six years after its acquisition by Google and ten years after its founding, and it didn't release a major product until DeepMind merged with Google Brain in 2023 and worked on Gemini. It's more likely that a research-focused startup like Sutskever's company, Safe Superintelligence, will not have any lucrative, productizable ideas, at least not for a long time, than that it will have ideas so great that merely disclosing their general contours would let other companies steal away its competitive advantage.
My guess is that Safe Superintelligence doesn't yet have any fantastic ideas that OpenAI, DeepMind, and others don't also have, and the secrecy covers for that fact just as conveniently as it protects the company's trade secrets or IP.