At a philosophical level, I don’t really find it very convincing that even a perfect recovery/replica would be righting any wrongs experienced by the subject in the past, but I can’t definitively explain why—only that I don’t think replicas are ‘the same lives’ as the original or really meaningfully connected to them in any moral way. For example, if I cloned you absolutely perfectly now, and then said, I’m going to torture you for the rest of your life, but don’t worry, your clone will be experiencing equal and opposite pleasures, would you think this is good (or evens out) for you as the single subject being tortured, and would it correct for the injustice being done to you as a subject experiencing the torture? All that is being done is making a new person and giving them a different experience to the other one.
Thanks for the pushback, it clarified my thinking further.
if I cloned you absolutely perfectly now, and then said, I’m going to torture you for the rest of your life, but don’t worry, your clone will be experiencing equal and opposite pleasures, would you think this is good (or evens out)
I think this thought experiment introduces more complexities than the scenario in the post avoids, e.g. having to weigh suffering vs. happiness. In the original scenario the torture/suboptimal life already would have happened to me, and now the question is whether it’s better in a moral sense to have a future filled with tons of happy fulfilled lives vs. one where one of those lives is lived by somebody that is basically me. And my intuition is that I’d feel much better knowing that what “I” am—my hopes, dreams, basic drives, etc.—will be fulfilled at some point in the future despite having been first instantiated in a world where those hopes etc. were tragically crushed.
So my intuition here probably comes more from a preference utilitarian perspective, where I want the preferences of specific minds to be fulfilled, and this would be somewhat possible by having a future close version of yourself with almost identical preferences/hopes/desires/affections etc.
Good discussion. My intuition is that if you have a close enough copy that shares the same memories as you, it would feel like it was you (i.e. be you). So say you resurrected people and made it so that they felt like a continuation of their previous selves. Perhaps if (in their original life) they got cancer and died young, they would instead remember being miraculously cured, or something. Even if there were multiple copies, they would all essentially be you (subjectively feel like you), just branched from the original (i.e. share a common history).
If there are no shared memories, then effectively it wouldn’t be much different than standard Open Individualism—i.e. you are already everyone, but just not directly experientially aware of the link. The fulfilling of preferences seems somewhat incomplete unless the original people know about it. Like you’d need the simulator somehow letting them know before they die that they will live again, or something (this is starting to sound religious :)).
Also, perhaps an easier route for all this is cryonics :)
I’m also very sympathetic to a preference utilitarian perspective, much more so than just suffering vs. happiness. But to me the preference satisfaction comes from the realised state of the world actually being as desired, and not from specifically experiencing that satisfaction. For example, people will willingly die in the name of furthering a cause they want to see realised, knowing full well they will not experience it. One would consider it something of a compensation for their sacrifice if their goals are realised after, or especially because of, their death.
Similarly, I think it would help to right past wrongs if, in the future, the past person’s desired state of the world comes to pass. But I still don’t see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.
One might imagine that the overall state of the world is more positive because there is this replica that is really ecstatic about their preferences being realised and being able to experience it, but specifically in terms of righting the wrong I don’t think it has added anything. They are not the same subject as the one who experienced the wrong—so it does not correct for their specific experience—and the payout is in any case in the realised state of the world and not in that past subject having to experience it.
Similarly, I think it would help to right past wrongs if, in the future, the past person’s desired state of the world comes to pass. But I still don’t see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.
I think where my intuitions diverge is that I expect many people to have a lot of self-directed preferences that I regard as ethically on the same footing as non-self-directed preferences: it seems you’re mostly considering states of the world like ensuring the survival and flourishing of their loved ones, or justice happening for crimes against humanity, or an evil government being overthrown and replaced by a democracy. But I’d guess this class of preferences should not be so distinct from people wanting the future state of the world to include themselves being happy, with a loving partner and family, friends and a community that holds them in high regard. And that’s why I feel like a past person would feel at least a little redeemed if they knew that at some future time they would see themselves living the fulfilled life that their past selves wished they could’ve enjoyed.
Ah I see, yes that seems to make a meaningful difference regarding the need to have the self experience it then. Although I would still question if having the replica achieves this. If we go to the clone example, if I clone you now with all your thoughts and desires and you remain unsatisfied, but I tell you that your clone is—contemporaneous with your continued existence—living a life in which all your desires are satisfied, would you find that satisfying? For me at least that would not be satisfying or reassuring at all. I don’t see a principled way in which stretching the replication process over time so that you no longer exist when the copy is created suddenly changes this. The preference would seem to be that the person’s subjective experience is different in the ways that they hope for, but all that is being done is creating an additional and alternative subjective experience that is like theirs, which experiences the good things instead.
Yeah, I think it’s a good point that stretching the replication process over time seems kind of arbitrary, and that making the existence of the replica and yourself contemporaneous might reduce the intuition that it is “you” who gets to live the life you wished for.
At the same time my personal intuitions (which are often different from other reasonable people :D) are actually not reduced much by the thought of a replicated copy of myself living at the same time. E.g. if I now think about a 1:1 copy of me living a fulfilled life with his wife and children in a “parallel universe”, I feel more deeply happy about this than thinking about the same scenario for friends or strangers.
Ha well, I think you might find a fair few people share your intuition, especially in some strands of EA that intersect with transhumanism.
I don’t personally share the intuition, but I think if I did, it would also make sense to me that the replica’s satisfaction would be correspondingly reduced to the extent they know some other self that they are identified with is or was not satisfied. But I appreciate at this point we’re just getting to conflicting intuitions!
A clone wouldn’t have the same consciousness, so that’s a bad deal. But for whatever reason, people have a sense of personal identity across time. I am fully willing to make intertemporal trade-offs. It seems more just to make up for past injustices.
Whether or not you could in theory create a replica of a person which has the same consciousness isn’t necessarily clear. If you’re entirely a physicalist and believe in the computational theory of mind, what reason is there for you not to believe you could recreate a person’s consciousness? Just exactly replicate all their brain processes.
‘If you’re entirely a physicalist and believe in the computational theory of mind, what reason is there for you not to believe you could recreate a person’s consciousness? Just exactly replicate all their brain processes.’ This confuses two different kinds of identity, “qualitative” and “numerical”:
Qualitative identity = I can have two different (qualitatively) identical apples, if one is a perfect duplicate of the other.
Numerical identity = X is numerically identical to Y if they’re the same object; for example, ‘the morning star’ and ‘the evening star’ are numerically identical, since these are both old names for the planet Venus.
What physicalism implies is that if you build someone who has all the same physical properties as me then they will be qualitatively identical to me, full-stop, because physicalism just is the view that all properties of things are fixed by their physical properties. But that doesn’t automatically mean they’d be numerically identical to me, any more than if I create a perfect duplicate of an apple, they’re both the same apple. Common sense says ‘no, they are not the same apple, because I started with only one apple and now I have two, and if there are two apples, they are (numerically) distinct from each other’. You could of course have a theory that ‘same person’ is special, in that any perfect duplicate of me just is me. But I don’t think that is very plausible: build a perfect duplicate of me while I am alive, and it seems like you have two (qualitatively) identical people, not just one person who is somehow in two places at once.
I think some people are confused about this because they’ve heard philosophers have “psychological” theories of personal identity, where if the informational contents of your brain get wiped and moved to another new brain, then you are the person with the new brain. But actually, the theories that philosophers take seriously which imply this don’t say that if two people have exactly the same mental properties, they must be the same person. What they say is that if there’s a future person whose psychological state depends on your current state in the right way, then that future person is you*, and they then combine this with the idea that if info is deliberately transferred from your brain to another brain, this is a connection of the right sort for the person with the new brain to count as you.
*Actually, it’s a little more complicated than that: you need to add a clause saying ‘and no other person at the same point in the future has mental states that depend on yours in the right way’. Can’t have 2 future people who are identical to you but not each other. That’s the key insight behind Derek Parfit’s famous argument that there are situations as selfishly good as survival for you but where you cease to exist: this happens when there are multiple duplicates of you whose mental states are each connected to yours in the right way.