These new systems are not (inherently) agents. So the classical threat scenario of Yudkowsky & Bostrom (the one I focused on in The Precipice) doesn’t directly apply. That’s a big deal.
It does look like people will be able to make powerful agents out of language models. But these systems don’t have to take agent form, so it may be possible for labs to first build aligned non-agent AGI to help develop safety measures for AI agents, or for national or international governance to outlaw advanced AI agents while still benefiting from advanced non-agent systems.
Unfortunately, the industry is already aiming to develop agent systems. Anthropic’s Claude now has “computer use” capabilities, and I recall that Demis Hassabis has also stated that agents are the next area to make big capability gains. I expect more bad news in this direction in the next five years.