If I were suffering intensely, it wouldn’t be comforting to me that there are other people who were just like me at one point but are now very happy – that feels like a completely different person to me. I’d rather there be someone completely happy than someone who had to undergo unnecessary suffering just to be more similar to me. Insofar as I care about personal identity, I care about whether it is a continuation of my brain, not whether it has experiences similar to mine.
Also, “saving” people using this method and having “benevolent AIs [...] distribute parts of the task between each other using randomness” seems indistinguishable from randomly torturing people, and that’s very unappealing for me.
This is because you use a non-copy-friendly theory of personal identity, which is reasonable but has other consequences.
I patched the second problem in the comments above: only the next moment after the suffering will be simulated and diluted. For someone in agony, this will obviously be the happiest moment: feeling that the pain has disappeared and knowing that he has been saved from hell.
It would be like an angel coming to a cancer patient and telling him that his disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.
Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and also of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the number of Evil AIs.
Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs (since they cannot win and know it) will increase the total positive utility in the universe.