This is an algorithmic trick without ethical value. The person who experienced suffering still experienced suffering. You can outweigh it by creating lots of good scenarios, but making those scenarios similar to the original one is irrelevant.
It is an algorithmic trick only if personal identity is strongly connected to this exact physical brain. But the text assumes, without any discussion, that identity is not brain-connected. That said, this doesn’t mean I completely endorse this “copy-friendly” theory of identity.
Identity is irrelevant if you evaluate total or average welfare through a standard utilitarian model.
Here is the way the whole trick will increase total welfare in the multiverse, copied from the comment below:
No copies of suffering observer-moments will be created; only the next moment after the suffering will be simulated and diluted, and for someone in agony this will obviously be the happiest moment: to feel that the pain has disappeared and to know that he has been saved from hell.
It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.
Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the number of Evil AIs.
Thus, the combination of the pleasure of being saved from an Evil AI and the lowering of the world-share of Evil AIs (as they can’t win and know it) will increase the total positive utility in the universe.
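To make this welfare bookkeeping concrete, here is a toy sketch in Python. The utility numbers, the copy count, and the copy-friendly counting of simulated observer-moments are all assumptions of the illustration, not claims from the text:

```python
# Toy welfare bookkeeping for the "rescue simulation" trick.
# All utility values and copy counts are illustrative assumptions, and the
# counting of simulated continuations presumes a copy-friendly view of identity.

def total_welfare(n_agony, n_relief_copies, u_agony=-100.0, u_relief=80.0):
    """Sum utilities over observer-moments: the original agony moments plus
    the simulated 'I have just been saved' continuations."""
    return n_agony * u_agony + n_relief_copies * u_relief

# One past moment of agony, diluted by 1000 simulated relief-continuations:
rescue_scheme = total_welfare(1, 1000)

# Alternative use of the same resources: 1000 unrelated happy moments
# (assumed per-moment utility 50), with the past agony left as it is:
plain_happy_scheme = 1 * -100.0 + 1000 * 50.0

print(rescue_scheme, plain_happy_scheme)  # 79900.0 49900.0
# The rescue scheme comes out ahead here only because the relief moment is
# assumed to be worth more than a plain happy moment (80 vs 50); whether that
# assumption holds is exactly what the rest of the thread disputes.
```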
There is no reason to prefer that over simply creating people with happy lives. You can always simulate someone who believes that they have just been saved from suffering if that constitutes the best life. The relation to some historical person who suffered is irrelevant and inefficient.
Deterring unfriendly AI is another matter. There are so many possible goal functions that can be used to describe possible unfriendly AIs that a general strategy for deterring them doesn’t make sense. At least not without a lot of theoretical groundwork that is presently lacking.
If the connection between observer-moments is not regarded as important (as I understand it, this is your point of view: there is no identity that chains observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past suffering.
In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
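As a rough formalization (my notation; it assumes a uniform self-sampling measure over observer-moments): if there are s suffering observer-moments and N happy ones, then

\[
P(\text{finding oneself in a suffering moment}) = \frac{s}{s+N} \longrightarrow 0 \quad \text{as } N \to \infty.
\]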
I addressed the problem of simulating evil AIs in the “Message to any future AI” text, which we already discussed. The main types of Evil AIs may be predicted by simulating the worlds in which they are likely to appear. We could gain an advantage over the Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don’t need to discuss all the technical details of how we will do it, as a benevolent AI will find better ways. (The idea comes from Rolf Nelson.)
The point, presumably, is that people would feel better because of the expectation that things would improve.
Of course, the criticism is that rather than simulating someone who starts in pain and then improves gradually, you could simply simulate someone with high welfare all along. But if you could achieve identity-continuity without welfare-level continuity, this cost wouldn’t apply.
1/1000 people supposedly feels better, but then 999/1000 people will feel slightly worse, because they are given a scenario where they think that things may get worse, when we have the power to give them a guaranteed good scenario instead. It’s just shifting expectations around, trying to create a free lunch.
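A back-of-the-envelope version of this point, with entirely made-up numbers for the split and the utility shifts:

```python
# Toy expected-welfare check of the "shifting expectations" objection.
# The 1/1000 split and both utility deltas are made-up illustrative numbers.
p_bad = 1 / 1000       # fraction of people actually in the bad scenario
relief_gain = 10.0     # assumed boost for someone who now expects rescue
worry_loss = 0.5       # assumed small loss for everyone else, who is told a
                       # story in which things may first get worse

net_change = p_bad * relief_gain - (1 - p_bad) * worry_loss
print(net_change)      # about -0.49: negative unless relief_gain > 999 * worry_loss
```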
It also requires that people in bad situations actually believe that someone is going to build an AI that does this. As far as ways of making people feel more optimistic about life go, this is perhaps the most convoluted one that I have seen. Really there are easier ways of doing that: for instance, make them believe that someone is going to build an AI which actually solves their problem.
See my patch to the argument in the comment to Lukas: we can simulate moments which are not in intense pain but are still very close to the initial suffering observer-moment, so they could be regarded as its continuation.