OK, I updated: the risk is less straightforward than I thought. While the AIs do call copies of themselves, rLLMs can’t really undergo a runaway replication cascade unless they can invoke themselves as “daemons” in separate threads (so that the control loop doesn’t have to wait for the output before continuing). And I currently don’t see an obvious profit motive for doing so. To make the distinction concrete, here’s a minimal Python sketch of the two patterns; `call_llm` is a hypothetical stand-in for an LLM API call, not any real library:
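```python
import threading
import time

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a blocking LLM API call; a real call
    # would go over the network and take seconds.
    time.sleep(1)
    return f"response to: {prompt}"

def blocking_step(task: str) -> str:
    # Ordinary pattern: the control loop waits here until the call
    # returns, so at most one copy is ever in flight at a time.
    return call_llm(task)

def daemon_step(task: str) -> None:
    # "Daemon" pattern: the call runs in a separate thread and this
    # function returns immediately, leaving the control loop free to
    # keep spawning further copies. That non-blocking fan-out is the
    # precondition for a runaway replication cascade.
    threading.Thread(target=call_llm, args=(task,), daemon=True).start()
```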
I’m not sure I understand your point correctly. The LLMs wouldn’t have to be replicated, because different copies of the self-replicating agent could access the same LLM in parallel, just as many human users can access ChatGPT at the same time. At a later stage, when the LLM operators try to block access to their LLMs or even take them offline, the agent would have to find a way to replicate at least one LLM just once and run it on sufficiently powerful hardware to use it as (part of) its “brain”. A rough sketch of the “many copies, one LLM” stage, again with `call_llm` as a hypothetical stand-in for a hosted API:
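```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the shared, hosted LLM's API endpoint.
    return f"response to: {prompt}"

# Many agent copies, one LLM: each copy only needs API access, the way
# many human users query ChatGPT concurrently; no model weights are
# duplicated anywhere.
tasks = [f"agent copy {i}: plan next step" for i in range(100)]
with ThreadPoolExecutor(max_workers=20) as pool:
    responses = list(pool.map(call_llm, tasks))
```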