Comments for What counts as death? will go here.
Vipassana meditation aims to give meditators experiential knowledge (rather than theoretical/intellectual understanding) of this conception of self. I think that’s what a lot of people get out of psychedelics as well.
I thought this paper was really interesting: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.12590
The abstract:
“It is an old philosophical idea that if the future self is literally different from the current self, one should be less concerned with the death of the future self (Parfit, 1984). This paper examines the relation between attitudes about death and the self among Hindus, Westerners, and three Buddhist populations (Lay Tibetan, Lay Bhutanese, and monastic Tibetans). Compared with other groups, monastic Tibetans gave particularly strong denials of the continuity of self, across several measures. We predicted that the denial of self would be associated with a lower fear of death and greater generosity toward others. To our surprise, we found the opposite. Monastic Tibetan Buddhists showed significantly greater fear of death than any other group. The monastics were also less generous than any other group about the prospect of giving up a slightly longer life in order to extend the life of another.”
One interesting note: “None of the participants we studied were long-term meditators (Tsongkhapa, 1991), and one important question for future research will be whether highly experienced practitioners of meditation would in fact show reduced fear of self-annihilation.” I don’t know if they ever did that future research.
Hi Holden!
I am happy to see you think deeply about questions of personal identity. I’ve been thinking about the same for many years (e.g. see “Ontological Qualia: The Future of Personal Identity”), and I think that addressing such questions is critical for any consistent theory of consciousness and ethics.
I broadly agree with your view, but here are a few things worth pointing out:
First, I prefer Daniel Kolak’s factorization of “views of personal identity”. Namely, Closed Individualism (common sense—we are each a “timeline of experience”), Empty Individualism (we are all only individual moments of experience, perhaps most similar to Parfit’s reductionist view as well as yours), and Open Individualism (we are all the same subject of experience).
I think that if Open Individualism is true a lot of ethics could be drastically simplified: caring about all sentient beings is not only kind, but in fact rational. While I think that Empty Individualism is a really strong candidate, I don’t discard Open Individualism. If you do assume that you are the same subject of experience over time (which I know you discard, but many don’t), I think it follows that Open Individualism is the only way to reconcile that with the fact that each moment of experience generated by your brain is different. In other words, if there is no identity carrier we can point to that connects every moment of experience generated by e.g. my brain, then we might infer that the very source of identity is the fact of consciousness per se. Just something to think about.
The other key thing I’d highlight is that you don’t seem to pay much attention to the mystery of why each snapshot of your brain is unified. Parfit also seems to neglect this puzzle; I don’t see it addressed anywhere in his writings despite its central importance to the problem of personal identity.
Synchrony is not a good criterion: there is no universal frame of reference. Plus, even if we could use synchrony as an approximate “unifier” of physical states, we face the further problem that we would need a natural ground-truth boundary to arise that would make your brain generate a moment of experience that is numerically distinct from those generated by other brains at the same time.
I do think that there is in fact a way to solve this. To do so, rather than thinking in terms of “binding” (i.e. why do these two atoms contribute to the same experience but not these two atoms?), we should think in terms of “boundaries” (i.e. what makes this region of reality have a natural boundary that separates it from the rest?). In particular, my solution uses topological segmentation, and IMO solves all of the classic problems. It results in a strong case for Empty Individualism, since topological boundaries in the fields of physics would be objective, causally significant, and frame-invariant (all highly desirable properties for the mechanism of individuation so that e.g. natural selection would have a way of recruiting moments of experience for computational purposes). Additionally, the topological pockets that define individual moments of experience would be spatiotemporal in nature. We don’t need to worry about infinitesimal partitions and a lack of objective frames of reference for simultaneity because the topological pockets have definite spatial and temporal depth. There would, in fact, be a definite and objective answer to “how many experiences are there in this volume of spacetime?” and similar questions.
If interested, I recommend watching my video about my solution to the binding problem here: Solving the Phenomenal Binding Problem: Topological Segmentation as the Correct Explanation Space. Even just reading the video description goes a long way :-) Let me know your thoughts if you get to it.
All the best!
What would be the “Continuous Replacement” take on cryonics?
For this question, assume that cryonics works (revival succeeds) and is costless. From a personal identity standpoint, is cryonics any different from a nap? Would you be interested in cryonics only to the extent that your projects and relationships were still around? i.e. interested only if your loved ones were also preserved? Less interested the more time would pass before revival? Would very long-term projects like “learn how the world works” or “protect humanity” or “see how this all turns out” provide enough justification?
And out of curiosity: Are you signed up for cryonics or interested in signing up?
Cf. section 6.3 of Parfit’s Ethics:
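[O]ne way of characterizing Parfit’s reductionism would be as a kind of illusionism or anti-realism about personal identity: you could say that we don’t really persist through time at all—we can just talk as though we do, for convenience.
Here’s a crucial question: is it rational to anticipate experiences that will be felt by some “future self” to whom you are strongly R-related? Or does anticipation implicitly presuppose a non-reductionist view of identity? Parfit (1984, 312) does not commit himself either way, suggesting that it “seems defensible both to claim and to deny that Relation R gives us reason for special concern.” Of course, your “future selves” (or R-related continuants) are as closely-related to you as can be, so if we have reason to be partial towards anyone, we presumably have reason to be partial towards them. But it would still seem a significant loss if we could no longer think of our future selves as ourselves: if they became mere relatives, however close.
I don’t think such a bleak view is forced on us, however. The distinction between philosophical reduction and elimination is notoriously thorny, and analogous questions arise all over the philosophical map. Consciousness, normativity, and free will are three examples for which it is comparably contentious whether reduction amounts to elimination. …
I find it tempting to give different answers in different cases. Consciousness and normativity strike me as sui generis phenomena, missing from any account that countenances only things constituted by atoms. For free will and personal identity, by contrast, I’m inclined to think that the “non-reductive” views don’t even make sense (the idea of ultimate sourcehood, or originally choosing the very basis on which you will make all choices—including that first one!—is literally incoherent). Reductive accounts of these latter phenomena can fill their theoretical roles satisfactorily, in my view.
Other readers may carve up the cases differently. However you do it, my suggestion would be that reductionists can more easily resist eliminativist pressures if they think there is no coherent possibility there to be eliminated. If ultimate sourcehood makes no sense, it would seem unreasonable to treat it as a requirement for anything else, including moral desert.^[To avoid amounting to a merely verbal dispute, I take it that reductionists and eliminativists must disagree about whether some putative reduction base suffices to fill an important theoretical role associated with the original concept.] So we might comfortably accept a compatibilist account as sufficing to make one responsible in the strongest sense, as there simply is nothing more that could be required. Perhaps a similar thing could be said of personal identity. If we think that “Further Fact” views are not merely theoretically extravagant, but outright impossible, it might be easier to regard relation R as sufficient to justify anticipation. What more could be required, after all?
This reasoning is not decisive. Eliminativists could insist that anticipation is *essentially* irrational, presupposing something that could not possibly be. Or they could insist that the Further Fact view is not incoherent, but merely contingently false. Even so, their side too seems to lack decisive arguments. As is so often the case in philosophy, it is up to us to judge what strikes us as the most plausible position, all things considered.
The non-eliminative, reductionist view is, at least, much less drastically revisionary. (If our future selves are better regarded as entirely new people, there would seem no basis for distinguishing killing from failing to bring into existence. You would have to reconceive of guns as contraceptive agents. Nobody survives the present moment anyway, on this view, so the only effect of lethally shooting someone would be to prevent a new, qualitatively similar person from getting to exist in the next moment. Not so bad!) Though even if Parfit’s reductionism can vindicate ordinary anticipation and self-concern, it certainly calls for some revisions to our normative thought....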
“Failing to bring into existence” seems an odd way of putting it. I would rephrase as “preventing from coming into existence,” and I think that makes a big difference.
E.g., choosing not to have a child (or choosing not to help someone else have one) is not a crime, but any action that deliberately caused an unwanted miscarriage would be.
Beyond that, I think there is plenty of room (if one wants) to define the relationship between past and future selves as “something special”—such that it is a special kind of tragedy when someone loses their opportunity to have future selves, even exactly on par with how tragic we normally think of murder as being—without giving up the benefits of the view I outlined.
I think it is tragic for someone’s life projects and relationships to be forcibly cut off—even when we imagine this as “cut off via the prevention of their future selves coming into existence to continue these projects and relationships”—in a way that “a life not coming into existence” isn’t. (I am pretty lukewarm on the total view; people who are more into that view might just say these are equally tragic.) In addition to how tragic it is, it seems like a quite different situation w/r/t whether blame and punishment are called for.
Philosophy without the black box
Continuity of consciousness may be a notion that’s more significant than commonly imagined. Psychologist William James presented continuity in memorable form, in his “Principles of Psychology”. 132 years later, his stream of thought, felt time-gaps, and unfelt time-gaps all remain active terms in the literature. Yet the greater concept—subjective continuity—seems not to be bounded by James’ familiar text. The concept seems applicable even at the extremities of life; no accepted line of reasoning renders it inapplicable.
Continuity reasoning can be structured around the natural case; i.e., the natural conditions and transitions found at extremities. No fictive elements are necessary in the reasoning: no teleporters, duplicates, digital copies, or re-creations are required. In fact, sci-fi can cripple reasoning just because there’s nothing to understand in the fictions, nothing functional inside the verbal “black box”.
For my part, I’ve made do without such black box fictions; I reasoned without them. Judging from correspondence post-publication, this was the right call.
-
Aristotle said, “All men by nature desire to know.” This was in fact the very first sentence of Aristotle’s “Metaphysics”. What to make of the black box, then?
There’s nothing to know about the word, “teleporter”, for example. One can imagine things, of course; but these imaginings can’t be solidified. A writer can say, “Let’s assume the teleportation black box works this way,” but he says this without authority. The reader can reply, “No, assume it works this entirely different way,” and overwrite the author’s analysis, freely. There’s no end to that fictive back-and-forth; it goes on and on.
Common facts receive comparatively little analysis.
So, was Aristotle right or wrong? Where the word “metaphysics” pertains, do all men desire to know, or not?
-
For consideration, the old essay: Metaphysics by Default
Chapters 1-4 are historical.
Chapters 5-7 give mathematical, computational, and neurological background, with a first inference.
Chapter 8 gives philosophical background, with a second inference.
Chapter 9 presents James’ text and applies inferences toward reasoning for boundless continuity. Some novelties follow.
ws
FYI, Sam Harris has a good talk-through of the death argument in #263.
I’ve heard this view referred to as a time-slice view of personal identity before.
Personal identity is tied to ordinary questions about the identity and persistence of ordinary objects.
So, you should probably have the same set of persistence conditions (time-slice / constant replacement) for cups, computers, organisms, atoms etc.
If that’s true, then “personality, relationships, and ongoing projects” are also only things that exist at time-slices. Plausibly, they don’t exist at all, since each necessarily exists through time. Either way, there’s no sense in which they can be shared with future selves.
I think this kind of issue is better solved by the “reductionist” understanding of Parfit’s views than the “eliminativist” / “illusionist” version. There’s no illusion of selfhood or constant replacement, just degrees of similarity that compose our idea of a self.
I’m not following why “[I] should probably have the same set of persistence conditions (time-slice / constant replacement) for cups, computers, organisms, atoms etc.” I don’t have those persistence conditions for myself in every possible sense—only in one particular, important sense that I pointed at in the post.
I think there are coherent uses of the words “Holden Karnofsky” and the singular tense; you can think of them as pointing at a “set of selves” that has something important in common and has properties of its own as a set. What I’m rejecting is the idea that there is some “continuous consciousness” such that I should fear death when it’s “interrupted,” but not when it isn’t. By a similar token, I think there are plenty of reasonable senses in which “my computer” is a single thing, and other senses in which my computer one day is different from my computer the next day. And same goes for my projects and relationships. In all of these cases, I could be upset if the future of such a thing is cut off entirely, but not if its physical instantiation is replaced with a functional duplicate.
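If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It’d be chill.
...
If that’s right, “constant replacement” could join a number of other ideas that feel so radically alien (for many) that they must be “impossible to live with,” but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)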
These contradict each other. Let’s say, like you imagined in an earlier post, that one day I’ll be able to become a digital person by destroying my physical body in a futuristic brain-scanning process. It’s pretty obvious that the connected conscious experience I’ve (I hope!) experienced my whole life, would, at that transition, come to an end. Whether or not it counts as me dying, and whether this new person ‘is’ me, are to some extent just semantics. But your and Parfit’s position seems to define away the basic idea of personal identity just to solve its problems. My lifelong connected conscious awareness would undeniably cease to exist; the awareness that was me would enter the inky nothingness. The fact that my clone is walking and talking is completely orthogonal to this basic reality.
So if I tried to live with this idea “for a full week”, except at the end of the week I know I’d be shot and replaced, I’d be freaking out, and I think you would be too. Any satisfactory theory of personal identity has to avoid equating death with age-related change. I should read Reasons and Persons, but none of the paradoxes you link to undermine this ‘connected consciousness’ idea of personal identity (which differs from what Bernard Williams—and maybe Parfit?—would call psychological continuity). As I understand it, psychological continuity allows for any given awareness to end permanently as long as it’s somewhere replaced, but what I’m naively calling ‘connected consciousness’ doesn’t allow this.
Another way of putting it: in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think most non-religious people, death is primarily undesirable because I don’t want to sleep forever!
Both parts you quoted are saying that the notion of personal identity I’m describing is (or at least can be) “fine to live with.” You might disagree with this, but I’m not following where the contradiction is between the two.
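So if I tried to live with this idea “for a full week”, except at the end of the week I know I’d be shot and replaced, I’d be freaking out, and I think you would be too.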
What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here—I don’t think anyone wants someone they care about to experience getting shot.)
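It’s pretty obvious that the connected conscious experience I’ve (I hope!) experienced my whole life, would, at that transition, come to an end.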
I don’t find it obvious that there’s something meaningful or important about the “connected conscious experience.” If I imagine a future person with my personality and memories, it’s not clear to me that this person lacks anything that “Holden a moment from now” has.
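Another way of putting it: in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think most non-religious people, death is primarily undesirable because I don’t want to sleep forever!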
I don’t think death is like sleeping forever, I think it’s like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.
They contradict each other in the sense that your full theory, since it includes the particular consequence that vaporization is chill, is, I think, not something anyone but a small minority would be fine to live with. Quantum mechanics and atheism impose no such demands. Calling this idea fine to live with isn’t too strong a claim when you’re just going about your daily life and ignoring the vaporization part; but “fine to live with” has to include every consequence, not just the ones that are indeed fine to live with. I interpreted the second quote as arguing that not just you but the general public could get used to this theory, in the same way they got used to quantum mechanics, because it doesn’t really affect their day-to-day. This is why I brought up your brain-scan hypothetical; here, the vaporization-is-chill consequence clearly affects their daily lives by offering a potentially life-or-death scenario.
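I don’t think death is like sleeping forever, I think it’s like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.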
Let’s say I die. A week later, a new medical procedure is able to revive me. What is the subjective conscious experience of the physical brain during this week? There is none—exactly like during a dreamless sleep. Of course death isn’t actually like sleeping forever; what’s relevant is that the conscious experience associated with the dead brain atom-pile matches that of the alive, sleeping brain, and also that of a rock.
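What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here—I don’t think anyone wants someone they care about to experience getting shot.)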
It’s not the gunshot that matters here. If at the end of this week I knew I’d painlessly, peacefully pass away, only to be reassembled immediately nearby with my family none the wiser, I would be freaking out just as much as in the gunshot scenario. The shorter replacement timescale (a second instead of a week) is the real distraction; it brings in some weird and mostly irrelevant intuitions, even though the two scenarios are functionally equivalent. Here’s what I think would happen in the every-second scenario, assuming that I knew your theory was correct: I would quickly realize (albeit over the course of many separate lives and with the thoughts of fundamentally different people) that each successive Martin dies immediately, and that in my one-second wake are thousands of former Martins sleeping dreamlessly. This may eventually become fine to live with only to the extent that the person living it doesn’t actually believe it—even if they believe they believe it. If I stayed true to my convictions and remained mentally alright, I’d probably spend most of my time staring at a picture of my family or something. This is why your call to try living with this idea for a week rings hollow to me. It’s like a deep-down atheist trying to believe in God for a week; the emotional reaction can’t be faked, even if you genuinely believe you believe in God.
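I don’t find it obvious that there’s something meaningful or important about the “connected conscious experience.” If I imagine a future person with my personality and memories, it’s not clear to me that this person lacks anything that “Holden a moment from now” has.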
I agree, this future person lacks nothing—from future person’s perspective. From the perspective of about-to-be-vaporized present person, who has the strongest claim to their own identity, future person lacks any meaningful connection to present person beyond the superficial, as present person’s brain’s conscious experience will soon be permanently nothing, a state that future person’s brain doesn’t share. Through my normal life, even if all my brain’s atoms eventually get replaced, it seems there is this ‘connected consciousness’ preserving one particular personal identity, rather than a new but otherwise identical one replacing it wholesale like in the teleporter hypothetical.
If I died, was medically revived a week later, and found a newly constructed Martin doing his thing, I would be pretty annoyed, and I think we’d both realize, given full mutual knowledge of our respective origins, that Martin’s personal identity belongs to me and not him.
I don’t intend these vague outlines to be an actual competing conception of personal identity; I have no idea what the real answer is. My core argument is that any theory that renders death-and-replacement functionally equivalent to normal life is unsatisfactory. You did inspire me to check out Reasons and Persons from the library; I hope I’m proven wrong by some thought experiment, and also that I’m not about to die.
This (often framed as being about the hard problem of consciousness) has long been a topic of argument in the rationalsphere. What I’ve observed is that some people have a strong intuition that they have a particular continuous subjective experience that constitutes what they think of as being “them”, and other people don’t. I don’t think this is because the people in the former group haven’t thought about it. As far as I can tell, very little progress has been made by either camp in converting the other to their preferred viewpoint, because the intuitions remain even after the arguments have been made.
I think this is pretty strong evidence that Holden and Parfit are p-zombies :)
Let’s say HT is Holden at time T.
Plausible Moral Rule (PMR): People cannot be morally blameworthy for actions that occurred before they existed.
By the PMR, for instance, HT cannot be blameworthy for a murder committed by Ted Bundy.
Now suppose that HT−1 committed murder on national television.
According to the view of personhood laid out in this post, plus the PMR, it seems like HT is not blameworthy for the murder committed by HT−1.
That seems whacky.
I think that seems whacky for precisely the reason that HT and HT−1 are the same person.
(Quick note: HT seems blameworthy for HT−1’s murder in a way that’s fundamentally different than the way we might say Holden’s parents are blameworthy, even if HT−1 is a minor.)
Me: *pours water on Holden’s head*
Holden: WTF??!
Me, 1 second later: It wasn’t me!
Holden, considers:
“Yeah it was! I saw you!”; or
“Fair enough.”
The reason I don’t agree that this is an issue is that I don’t accept the “plausible moral principle” (I alluded to this briefly in footnote 3 of the piece).
I titled the piece “what counts as death?” because it is focused on personal identity for that purpose. We need not accept “HT is not responsible for HT-1’s actions” in order to accept “HT-1 cares about HT analogously to a close relation, with continuity of experience being unimportant here” or “HT-1 and HT do not have the kind of special relationship that powers a lot of fears about teleportation being death, and other paradoxes.”
Admittedly, part of the reason I feel OK preserving the normal “responsibility” concept while scrapping the normal “death” concept is that I’m a pragmatist about responsibility: to me, “HT is responsible for HT-1’s actions” means something like “Society should treat HT as responsible for HT-1’s actions; this will get good results.” My position would be a more awkward fit for someone who wanted to think of responsibility as something more fundamental, with a deep moral significance.
Thanks for your thoughts, Holden! Fun to engage.
re: The Pragmatic View of Blameworthiness/Responsibility
Something like Moore’s open-question argument makes me resist your “pragmatic” view of moral blame. It seems like we could first decide whether or not someone is blameworthy and then ask a further, separate question about whether they should be punished. For instance, imagine that Jack was involved in a car accident that resulted in Jill’s death. Each of the following questions seems independently sensible to me:
(a) Is Jack morally responsible (i.e., blameworthy) for Jill’s death?
(b) Assuming yes, is it morally right to punish Jack? (Set aside legal considerations for our purposes.)
If the pragmatic view about blameworthiness is correct, asking this second question (b) is as incoherent, vacuous, or nonsensical as saying, “I know there’s water in this glass, but is it H2O that’s in there?” But if determining that (a) Jack is blameworthy for Jill’s death still leaves open (b) the question of whether or not to punish Jack, then blameworthiness and punishment-worthiness are not identical (cf., the pragmatic view).[1]
re: Focus of the Piece was Death, not Moral Blame
I understood that the purpose of your post was to consider the implications of a certain view about personal identity continuity (PIC) for our conception of death. But I was trying to show that this particular view of PIC was incompatible with a commonsense view about moral blame. If they are in fact incompatible, and if the commonsense view about moral blame is right, then we have reason to reject this view of PIC (and then we don’t need to ask what its implications are for our notions of death).
So is that view of moral blame wrong?
It seems prima facie correct to me that Jack cannot be blameworthy for an action that occurred before Jack existed.
But it seems like you reject this idea. I’ll think harder about whether that view of blameworthiness is correct. For now:
I see how HT−1 can be (causally, morally) responsible for something that HT does, but I don’t see how HT can be responsible for something HT−1 does unless HT and HT−1 are the same person. For HT to be responsible for something HT−1 does, assuming they’re 2 different people, it seems like you’d have to have a concept of responsibility that is fully independent of causality (assuming no backwards-causation). I’m curious what view that would be.
As an aside, your Footnote 3 seems like a reason HT−1 might have for caring about the interests and wellbeing of HT, but it doesn’t seem like a reason why HT is in fact responsible for the actions of that other dude, HT−1 (if they’re 2 different people).
Thanks for your thoughts!
P.S. I’m new to all of this, so if anything about my comments is counter-normative, I’d be thrilled for some feedback!
[1] We can further think about the separability of these two questions by asking (b) irrespective of (a). For instance, there might be pragmatic reasons to punish a car passenger for drinking alcohol even if there’s nothing blameworthy about a passenger drinking alcohol per se.
In response to the paragraph starting “I see how …” (which I can’t copy-paste easily due to the subscripts):
I think there are good pragmatic arguments for taking actions that effectively hold Ht responsible for the actions of Ht-1. For example, if Ht-1 committed premeditated murder, this gives some argument that Ht is more likely to harm others than the average person, and should be accordingly restricted for their benefit. And it’s possible that the general practice of punishing Ht for Ht-1’s actions would generally deter crime, while not creating other perverse effects (more effectively than punishing someone else for Ht-1’s actions).
In my view, that’s enough—I generally don’t buy into the idea that there is something fundamental to the idea of “what people deserve” beyond something like “how people should be treated as part of the functioning of a healthy society.”
But if I didn’t hold this view, I could still just insist on splitting the idea of “the same person” into two different things: it seems coherent to say that Ht-1 and Ht are the same person in one sense and different people in another sense. My main claim is that “myself 1 second from now” and “myself now” are different people in the same sense that “a copy of myself created on another planet” and “myself” are different people; we could simultaneously say that both pairs can be called the “same person” in a different sense, one used for responsibility. (And indeed, it does seem reasonable to me that a copy would be held responsible for actions that the original took before “forking.”)