Hi David. There are two ways of talking about personal identity over time. There’s the ordinary way, where we’re talking about something like sameness of personality traits, beliefs, preferences, etc. over time. Then there’s the “numerical identity” way, where we’re talking about just being the same thing over time (i.e., one and the same object). It sounds to me like either (a) you’re running these two things together or (b) you have a view where the relevant kinds of changes in personality traits, beliefs, preferences, etc. result in a different thing existing (one of many possible future Davids). If the former, then I’ll just say that I meant only to be talking about the “numerical identity” sense of sameness over time, so we don’t get the problem you’re describing in the intra-individual case. If the latter, then that’s a pretty big philosophical dispute that we’re unlikely to resolve in a comment thread!
I don’t necessarily care about the concept of personal identity over time, but I think there’s a very strong decision-making foundation for considering uncertainty about future states. In one framing, I buy insurance because in some future states it is very valuable, and in other future states it is not. I am effectively transferring money from one future version of myself to another. That sticks with a numerical identity view of myself, but it’s still critical to consider different futures, even without any complex view of what makes me “the same person”.
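To make the insurance framing concrete, here’s a minimal sketch of the standard expected-utility reasoning. The numbers and the log-utility function are purely illustrative assumptions of mine, not anything from the discussion above; the point is just that shifting money from a good future state to a bad one can raise expected utility even when the premium costs more than the expected loss.

```python
import math

# Toy numbers (illustrative assumptions only):
wealth = 100_000   # current wealth
loss = 50_000      # loss suffered in the "bad" future state
p_loss = 0.01      # probability of the bad state
premium = 600      # premium, actuarially unfair: > p_loss * loss = 500

def utility(w):
    # A standard risk-averse (concave) utility function; log is one common choice.
    return math.log(w)

# Uninsured: keep full wealth in the good state, absorb the loss in the bad one.
eu_uninsured = (1 - p_loss) * utility(wealth) + p_loss * utility(wealth - loss)

# Insured: pay the premium in every state, but the loss is covered in the bad one.
eu_insured = utility(wealth - premium)

print(f"Expected utility, uninsured: {eu_uninsured:.6f}")
print(f"Expected utility, insured:   {eu_insured:.6f}")
# Insurance wins despite costing more than the expected loss, because it moves
# money from the future state where it matters little to the one where it
# matters a lot.
```

None of this requires a complicated theory of personal identity; it only requires treating the different possible futures as worth weighing against one another.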
But I think that if you embrace the view you present as obvious for contractualists, where we treat future people fundamentally differently from present people and do not allow consideration of different potential futures, you end up with some very confused notions about how to plan under uncertainty, and you can never prioritize investments that pay off primarily in even the intermediate-term future. For example, mitigating climate emissions should be ignored, because we can do more good for current people by addressing present harms rather than preventing future ones; we should emit more and ignore the fact that this will, with certainty, make the future worse, because those future people don’t have much of a moral claim. And from a consequentialist viewpoint—which I think is relevant even if we’re not accepting it as a guiding moral principle—we’d all be much, much worse off if this sort of reasoning had been embraced in the past.