No, there is no interesting new method here, it’s using LLM scaffolding to copy some files and run a script. It can only duplicate itself within the machine it has been given access to.
In order for AI to spread like a virus it would have to have some way to access new sources of compute, for which it would need be able to get money or the ability to hack into other servers. Neither of which current LLMs appear to be capable of.
“Neither of which current LLMs appear to be capable of.”
If o1 pro isn’t able to both hack and get money yet, it’s shockingly close. (Instruction tuning for safety makes accessing that capability very difficult.)
AI’s are already getting money with crypto memecoins. Wondering if there might be some kind of unholy mix of AI generated memecoins, crypto ransomware and self-replicating AI viruses unleashed in the near future.
Surely it’s just a matter of time—now that the method has been published—before AI models are spreading like viruses?
No, there is no interesting new method here, it’s using LLM scaffolding to copy some files and run a script. It can only duplicate itself within the machine it has been given access to.
In order for AI to spread like a virus it would have to have some way to access new sources of compute, for which it would need be able to get money or the ability to hack into other servers. Neither of which current LLMs appear to be capable of.
“Neither of which current LLMs appear to be capable of.”
If o1 pro isn’t able to both hack and get money yet, it’s shockingly close. (Instruction tuning for safety makes accessing that capability very difficult.)
AI’s are already getting money with crypto memecoins. Wondering if there might be some kind of unholy mix of AI generated memecoins, crypto ransomware and self-replicating AI viruses unleashed in the near future.
One can hope that the damage is limited and that it serves as an appropriate wake-up call to governments. I guess we’ll see..