I also noticed this post. It could be that OpenAI is more safety-conscious than the ML mainstream. That might not be safety-conscious enough. But it seems like something to be mindful of if we’re tempted to criticize them more than we criticize the less-safety-conscious ML mainstream (e.g. does Google Brain have any sort of safety team at all? Last I checked they publish way more papers than OpenAI. Then again, I suppose Google Brain doesn’t brand themselves as trying to discover AGI—but I’m also not sure how correlated a “trying to discover AGI” brand is likely to be with actually discovering AGI?)
Vicarious and Numenta are both explicitly trying to build AGI, and neither does any safety/alignment research whatsoever. I don’t think this fact is particularly relevant to OpenAI, but I do think it’s an important fact in its own right, and I’m always looking for excuses to bring it up. :-P
Anyone who wants to talk about Vicarious or Numenta in the context of AGI safety/alignment, please DM or email me. :-)
In the absence of rapid public progress, my default assumption is that “trying to build AGI” is mostly a marketing gimmick. There seem to be several other companies like this, e.g.:
https://generallyintelligent.ai/
But it is possible they’re just making progress in private, or might achieve some kind of unexpected breakthrough. I’m just less clear on how to handle those scenarios. Maybe by tracking talent flows, which is something the AI safety community has been trying to do for a while.
I do think we should be worried about DeepMind, though OpenAI has undergone more dramatic changes recently, including restructuring into a for-profit, losing a large chunk of the safety/policy people, taking on new leadership, etc.
This turns out to be at least part of the answer. As I’m told, Jan Leike joined OpenAI earlier this year and does run an alignment team there.
Google does claim to be working on “general purpose intelligence” https://www.alignmentforum.org/posts/bEKW5gBawZirJXREb/pathways-google-s-agi