Thanks for the pushback, it clarified my thinking further.
If I cloned you absolutely perfectly now, and then said, I’m going to torture you for the rest of your life, but don’t worry, your clone will be experiencing equal and opposite pleasures, would you think this is good (or evens out)?
I think this thought experiment introduces more complexities that the scenario in the post avoids, e.g. having to weigh suffering vs. happiness. In the original scenario the torture/suboptimal life already would have happened to me, and now the question is whether it’s better in a moral sense to have a future filled with tons of happy fulfilled lives vs. one where one of those lives is lived by somebody that is basically me. And my intuition is that I’d feel much better knowing that what “I” am, my hopes, dreams, basic drives, etc. will be fulfilled at some point in the future despite having been first instantiated in a world where those hopes etc. were tragically crushed.
So my intuition here probably comes more from a preference utilitarian perspective, where I want the preferences of specific minds to be fulfilled, and this would be somewhat possible by having a future close version of yourself with almost identical preferences/hopes/desires/affections etc.
Good discussion. My intuition is that if you have a close enough copy that shares the same memories as you, it would feel like it was you (i.e. be you). So say you resurrected people and made it so that they felt like a continuation of their previous selves. Perhaps if (in their original life) they got cancer and died young, they would instead remember being miraculously cured, or something. Even if there were multiple copies, they would all essentially be you (subjectively feel like you), just branched from the original (i.e. share a common history).
If there are no shared memories, then effectively it wouldn’t be much different than standard Open Individualism—i.e. you are already everyone, but just not directly experientially aware of the link. The fulfilling of preferences seems somewhat incomplete unless the original people know about it. Like you’d need the simulator somehow letting them know before they die that they will live again, or something (this is starting to sound religious :)).
Also, perhaps an easier route for all this is cryonics :)
I’m also very sympathetic to a preference utilitarian perspective, much more so than just suffering vs. happiness. But to me the preference satisfaction comes from the realised state of the world actually being as desired, and not from specifically experiencing that satisfaction. For example, people will willingly die in the name of furthering a cause they want to see realised, knowing full well they will not experience it. One would consider it something of a compensation for their sacrifice if their goals are realised after, or especially because of, their death.
Similarly, I think it would help to right past wrongs if, in the future, the past person’s desired state of the world comes to pass. But I still don’t see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.
One might imagine that the overall state of the world is more positive because there is this replica that is really ecstatic about their preferences being realised and being able to experience it, but specifically in terms of righting the wrong I don’t think it has added anything. They are not the same subject as the one who experienced the wrong—so it does not correct for their specific experience—and the payout is in any case in the realised state of the world and not in that past subject having to experience it.
Similarly, I think it would help to right past wrongs if, in the future, the past person’s desired state of the world comes to pass. But I still don’t see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.
I think where my intuitions diverge is that I expect many people to have a lot of self-directed preferences that I regard as ethically on the same footing as non-self-directed preferences. It seems you’re mostly considering states of the world like ensuring the survival and flourishing of their loved ones, or justice happening for crimes against humanity, or an evil government being overthrown and replaced by a democracy. But I’d guess this class of preferences should not be so distinct from people wanting the future state of the world to include themselves being happy, with a loving partner and family, friends, and a community that holds them in high regard. And that’s why I feel like a past person would feel at least a little redeemed if they knew that at some future time they would see themselves living the fulfilled life that their past selves wished they could’ve enjoyed.
Ah I see, yes, that does seem to make a meaningful difference regarding the need for the self to experience it. Although I would still question whether having the replica achieves this. If we go to the clone example: if I clone you now with all your thoughts and desires and you remain unsatisfied, but I tell you that your clone is—contemporaneous with your continued existence—living a life in which all your desires are satisfied, would you find that satisfying? For me at least that would not be satisfying or reassuring at all. I don’t see a principled way in which stretching the replication process over time, so that you no longer exist when the copy is created, suddenly changes this. The preference would seem to be that the person’s subjective experience is different in the ways that they hope for, but all that is being done is creating an additional and alternative subjective experience that is like theirs, which experiences the good things instead.
Yeah, I think it’s a good point that stretching the replication process over time seems kind of arbitrary, and that making the existence of the replica and yourself contemporaneous reduces the intuition that it is “you” who gets to live the life you wished for.
At the same time my personal intuitions (which are often different from other reasonable people’s :D) are actually not reduced much by the thought of a replicated copy of myself living at the same time. E.g. if I now think about a 1:1 copy of me living a fulfilled life with his wife and children in a “parallel universe”, I feel more deeply happy about this than thinking about the same scenario for friends or strangers.
Ha well, I think you might find a fair few people share your intuition, especially in some strands of EA that intersect with transhumanism.
I don’t personally share the intuition, but I think if I did, I would also expect the replica’s satisfaction to be correspondingly reduced to the extent they know some other self they are identified with is or was not satisfied. But I appreciate at this point we’re just getting to conflicting intuitions!