It's not the nanosystems that need to be aligned, but the transformative AI system that builds them.
Ok, fair. I would say we could still verify properties of the designs more safely (in simulations and isolated environments), even if we couldn't come up with them ourselves. And the target seems much narrower: it could be met by providing much more limited information about the world (basically just physics and chemistry), so the kind of AI involved would not necessarily be agential or risky.