Vicarious and Numenta are both explicitly trying to build AGI, and neither does any safety/alignment research whatsoever. I don’t think this fact is particularly relevant to OpenAI, but I do think it’s an important fact in its own right, and I’m always looking for excuses to bring it up. :-P
Anyone who wants to talk about Vicarious or Numenta in the context of AGI safety/alignment, please DM or email me. :-)
In the absence of rapid public progress, my default assumption is that “trying to build AGI” is mostly a marketing gimmick. There seem to be several other companies like this, e.g.:
https://generallyintelligent.ai/
But it is possible they’re just making progress in private, or might achieve some kind of unexpected breakthrough. I guess I’m just less clear about how to handle these scenarios. Maybe by tracking talent flows, which is something the AI Safety community has been trying to do for a while.