anything smart enough to be existentially dangerous is still a long way away
I don’t think this is really a tenable position any more, post GPT-4 and AutoGPT. See e.g. Connor Leahy explaining that LLMs are basically “general cognition engines” and will scale to full AGI within a generation or two (especially with the addition of various plugins to aid “System 2”-type thinking, which are now freely being offered by AutoGPT enthusiasts and OpenAI). If this isn’t clear now, it will be in a few months, once Google DeepMind releases the next version of its multimodal (text, images, video, robotics) AI.
Some experts still seem to hold it, e.g. Yann LeCun: https://twitter.com/ylecun/status/1621805604900585472 Whether or not they in fact have good reason to think this, it’s surely evidence that people at DeepMind could be thinking this way too.
I think multimodal models kind of make LeCun’s points about text moot. GPT-4 is already text + images (making “LLM” a misnomer).