I’m not disagreeing with you that there is a possibility, however small, of s-risk scenarios. I agree with this point of yours, although I’m thinking more of things like superhuman persuasion, deception, and pitting human tribes against one another, rather than nanobots necessarily:
If an unaligned AI really has the aim to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
People often bring this up in the context of brain preservation. But it seems to me that this possibility is mostly a generalized argument against life extension, medicine, pronatalism, etc., rather than an argument against brain preservation in particular.
Congrats! One way I’ve been thinking about this recently: if we expect that most people alive now will permanently die (usually without desiring to do so), but that at some point in the future humanity will “cure death,” then interventions that allow people to join the cohort of people who don’t have to involuntarily die could be remarkably effective from a QALY perspective. As I’ve argued before, I think the key questions for this analysis are how many QALYs an individual can experience, whether humans are simply replaceable, and what the probability is that brain preservation will help people get there. Another consideration is that if the procedure could be performed cheaply enough (perhaps with robotic automation), it could also be used for non-human animals, with a similar justification.
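To make the QALY framing concrete, here is a toy expected-value sketch in Python. Every number in it is a made-up assumption for illustration (the probabilities, the QALY ceiling, and the cost are not estimates I’m defending), and it sets aside the replaceability question entirely:

```python
# Toy expected-value sketch for brain preservation as a QALY intervention.
# All numbers below are illustrative assumptions, not estimates.

p_preservation_works = 0.05    # assumed P(preservation captures what matters)
p_revival_given_works = 0.30   # assumed P(revival in a "death-cured" future | preservation worked)
qalys_if_revived = 10_000      # assumed QALYs a revived person could experience
cost_per_procedure = 20_000    # assumed cost (USD) per preservation

expected_qalys = p_preservation_works * p_revival_given_works * qalys_if_revived
cost_per_qaly = cost_per_procedure / expected_qalys

print(f"Expected QALYs per person preserved: {expected_qalys:.0f}")
print(f"Cost per expected QALY: ${cost_per_qaly:,.0f}")
```

The point of the sketch is just that a large QALY ceiling can dominate the calculation even at low success probabilities, which is why the question of how many QALYs an individual can experience matters so much here.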