It is an algorithmic trick only if personal identity is strongly tied to this exact physical brain. The text, however, assumes without any discussion that identity is not brain-bound. That said, this does not mean I completely endorse this “copy-friendly” theory of identity.
Identity is irrelevant if you evaluate total or average welfare through a standard utilitarian model.
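A minimal formalization of that point (the notation $w_i$, $N$ is mine, not from the discussion): with $w_i$ the welfare of observer-moment $i$,

$$U_{\text{total}} = \sum_{i=1}^{N} w_i, \qquad U_{\text{average}} = \frac{1}{N} \sum_{i=1}^{N} w_i.$$

Neither quantity depends on how observer-moments are grouped into persons, which is why identity drops out of the evaluation.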
Here is how the whole trick would increase total welfare in the multiverse, copied from the comment below:

No copies of suffering observer-moments will be created; only the moment immediately after the suffering will be simulated and diluted. For someone in agony, this will obviously be the happiest possible moment: to feel that the pain has disappeared and to know that he has been saved from hell.

It will be like an angel coming to a cancer patient and telling him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs (since they cannot win and know it) will increase the total positive utility in the universe.
There is no reason to prefer that over simply creating people with happy lives. You can always simulate someone who believes they have just been saved from suffering, if that constitutes the best life. Tying this to some historical person who actually suffered is irrelevant and inefficient.

Deterring unfriendly AI is another matter. There are so many possible goal functions describing possible unfriendly AIs that a general strategy for deterring them doesn’t make sense, at least not without a lot of theoretical groundwork that is presently lacking.
If the connection between observer-moments is not regarded as important (as I understand it, this is your view: there is no identity chaining observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past sufferings.

In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
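To sketch why (assuming a uniform self-sampling measure over observer-moments; the symbols $N_s$ and $N_h$ are my own illustrative notation): if there are $N_s$ suffering observer-moments and $N_h$ happy ones, then

$$P(\text{suffering}) = \frac{N_s}{N_s + N_h} \longrightarrow 0 \quad \text{as } N_h \to \infty.$$

The past suffering still exists, but its measure among all observer-moments becomes negligible.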
I addressed the problem of simulating evil AIs in the “Message to any future AI” text, which we have already discussed. The main types of Evil AIs may be predicted by simulating the worlds where they are likely to appear. We can gain an advantage over Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don’t need to discuss all the technical details of how we will do it, as a benevolent AI will find better ways. (The idea comes from Rolf Nelson.)
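One way to make the deterrence condition explicit (a rough sketch under strong assumptions; the symbols are mine, not Nelson’s): from the Evil AI’s perspective, torture is unprofitable whenever

$$p_{\text{sim}} \cdot L_{\text{punish}} > (1 - p_{\text{sim}}) \cdot G_{\text{torture}},$$

where $p_{\text{sim}}$ is the probability the Evil AI assigns to being inside a Benevolent AI’s simulation, $L_{\text{punish}}$ is its loss if punished there, and $G_{\text{torture}}$ is its gain from torture in a real world. The point of declaring the policy before such AIs are created is to raise $p_{\text{sim}}$ in their estimates.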