Karl—I like that you’ve been able to develop a plausible scenario for a global catastrophic risk that’s based mostly on side-effects of evolutionary self-replication, rather than direct power-seeking.
This seems to be a relatively neglected failure mode for AI. Back in the 1990s, when everybody was concerned about nanotechnology, the ‘grey goo’ scenario was a major worry: self-replicating nanotech turning everything on Earth into copies of itself. Your story explores a kind of AI version of the grey goo catastrophe.