I’m not sure I understand the scenario you are discussing. In your scenario, it sounds like you’re positing a malevolent non-aligned AI that would forcibly upload and create suffering copies of people. Obviously, this is an almost unfathomably horrific hypothetical scenario which we should all try to prevent if we can. One thing I don’t understand about the scenario you are describing is why this forcible uploading would only happen to people who are legally dead and preserved at the time, but not anyone living at the time.
To answer your question, I have to describe some scenarios for how a non-aligned AI might act. This feels slightly awkward, since we do not know what an unaligned AI would actually do, and it sounds very much like science fiction. In something like a robot uprising, or a nuclear war started by an AI, many people would die under circumstances where uploading is impossible, but brain banks could still be intact. If an unaligned AI really has the aim to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
In my personal, very subjective opinion, there is a 10% chance of extinction by AI and a 1% chance of s-risks or something like Roko’s basilisk. You may have different subjective probabilities, and even if we agreed on the probabilities, what to do about them depends very much on your preferred ethical theory.
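To make that last point concrete, here is a toy expected-disvalue sketch, not anything from the original discussion: all the "badness" weights are placeholder numbers I made up, and the only claim is structural, namely that the same subjective probabilities can point in different directions under different ethical weightings.

```python
# Toy sketch: how the same subjective probabilities lead to different
# conclusions under different ethical weightings. All weights are
# illustrative placeholders, not substantive claims.

P_EXTINCTION = 0.10  # subjective probability of AI-caused extinction
P_S_RISK = 0.01      # subjective probability of an s-risk outcome

def expected_disvalue(extinction_badness: float, s_risk_badness: float) -> float:
    """Expected disvalue given how bad each outcome is judged to be."""
    return P_EXTINCTION * extinction_badness + P_S_RISK * s_risk_badness

# A view that rates s-risks only somewhat worse than extinction:
# 0.10 * 1 + 0.01 * 5 = 0.15, so the extinction term dominates.
comparable = expected_disvalue(extinction_badness=1.0, s_risk_badness=5.0)

# A suffering-focused view that rates s-risks orders of magnitude worse:
# 0.10 * 1 + 0.01 * 1000 = 10.10, so the s-risk term dominates.
suffering_focused = expected_disvalue(extinction_badness=1.0, s_risk_badness=1000.0)

print(f"comparable weighting:  {comparable:.2f}")
print(f"suffering-focused:     {suffering_focused:.2f}")
```

So even with identical probability estimates, which risk should drive your decisions (e.g., whether to prioritize AI safety work over preservation) hinges on the relative weights your ethical theory assigns.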
I’m not disagreeing with you that there is a possibility, however small, of s-risk scenarios. I agree with this point of yours, although I’m thinking more of things like superhuman persuasion, deception, pitting human tribes against one another, etc., rather than nanobots necessarily:
If an unaligned AI really has the aim to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
People often bring this up in the context of brain preservation. But it seems to me that this possibility is mostly a generalized argument against life extension, medicine, pronatalism, and so on.
On the one hand, I am more concerned with learning about AI safety than with organizing my own cryopreservation or chemical brain preservation. This is mostly because I think it is more important to avoid the worst possible futures before planning to be alive in the future. On the other hand, I think life extension should receive much more funding (as long as it does not rely on AI with dangerous capabilities), I am against anti-natalism, and I think that brain preservation should be available for those who wish it. There is a certain tension between these two positions. I do not know how to resolve that tension, but thank you for pointing out the problem with my worldview.