Good point. This is true for those who believe this, but it applies to any form of medicine or life extension, right? Not just brain preservation. So for someone who holds this view, theoretically it might apply to the antimalarial medication case as well?
The difference is that if you are biologically dead, there is nothing you can do to prevent a malevolent actor from uploading your mind. If you are terminally ill and pessimistic about the future, you can at least choose cremation.
I am not saying that there should be no funding for brain preservation, but personally I am not very enthusiastic about it, since there is a danger that we will not solve the alignment problem.
I’m not sure I understand the scenario you are discussing. In your scenario, it sounds like you’re positing a malevolent non-aligned AI that would forcibly upload and create suffering copies of people. Obviously, this is an almost unfathomably horrific hypothetical scenario, which we should all try to prevent if we can. One thing I don’t understand about the scenario you are describing is why this forcible uploading would only happen to people who are legally dead and preserved at the time, but not to anyone living at the time.
To answer your question, I have to describe some scenarios of how a non-aligned AI might act. This is slightly cringe, since we do not know what an unaligned AI would actually do, and it all sounds very much like science fiction. In the case of something like a robot uprising or a nuclear war started by an AI, many people would die under circumstances in which uploading is impossible, but brain banks could still be intact. If an unaligned AI really aims to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
In my personal, very subjective opinion, there is a 10% chance of extinction by AI and a 1% chance of s-risks or something like Roko’s basilisk. You may have different subjective probabilities, and even if we agree on the probabilities, what to do depends very much on your preferred ethical theory.
I’m not disagreeing with you that there is a possibility, however small, of s-risk scenarios. I agree with this point of yours, although I’m thinking of things more like superhuman persuasion, deception, or pitting human tribes against one another, rather than nanobots necessarily:
If an unaligned AI really aims to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]
People often bring this up in the context of brain preservation. But it seems to me that this possibility is mostly a generalized argument against life extension, medicine, pronatalism, etc., in general.
On the one hand, I am more concerned with learning about AI safety than with organizing my own cryopreservation or chemical brain preservation. This is mostly because I think it is more important to avoid the worst possible futures before planning to be alive in the future. On the other hand, I think life extension should receive much more funding (as long as it does not use AI with dangerous capabilities), I am against anti-natalism, and I think brain preservation should be available to those who wish for it. There is a certain tension between these two positions. I do not know how to resolve that tension, but thank you for pointing out the problem with my worldview.