Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid.
I think this is confused. Imagine we consider each person to be different over time, à la personites, and consider the distribution of possible people I will be next year. There are an incredibly large number of possible changes that could alter my mental state and, depending on what I eat, the physical composition of my body. Does each of these future mes have only an infinitesimal claim, and therefore, according to contractualism, almost no importance compared to any claim that exists before that time? If so, you can only care about the immediate future, and can never prioritize what will affect me in a year over what will affect some other person in ten minutes.
Hi David. It’s probably true that if you accept that picture of persons, then the implications of contractualism are quite counterintuitive. Of course, I suspect that most contractualists reject that picture.
I don’t see a coherent view of people that doesn’t have some version of this. My firstborn child was not a specific person until he was conceived, even when my wife and I were planning to have a child. As a child, who he is and who he will be are still very much being developed over time. But who I will be in 20 years is also still very much being determined, and I hope people reason about their contractualist obligations in ways that are consistent with the fact that people change over time in ways that aren’t fully predictable in advance.
More to the point, the possible mes in 20 years, however many there are, should collapse to having a value exactly equal to mine, possibly discounted into the future. Why is the same not true of future people, where each of the many different possible people has almost zero claim and those claims never get aggregated at all?
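To make the aggregation I have in mind explicit (a rough sketch, and the probability-weighting of claims is my own assumption, not something the contractualist is committed to): suppose the possible versions of me in 20 years are $s_1, \ldots, s_n$, occurring with probabilities $p_1, \ldots, p_n$ that sum to 1, and each one’s claim of strength $c$ is weighted by its probability. Then the aggregate is

$$\sum_{i=1}^{n} p_i \, c \;=\; c \sum_{i=1}^{n} p_i \;=\; c,$$

one full person’s claim rather than $n$ negligible ones. The same bookkeeping seems to apply to future people: if nearly every possible future contains roughly $N$ people, then summing probability-weighted claims across all the possible individuals who might fill those slots gives roughly $N$ full claims, even though each particular possible person is very unlikely to exist.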
Hi David. There are two ways of talking about personal identity over time. There’s the ordinary way, where we’re talking about something like sameness of personality traits, beliefs, preferences, etc. over time. Then, there’s the “numerical identity” way, where we’re talking about just being the same thing over time (i.e., one and the same object). It sounds to me like either (a) you’re running these two things together or (b) you have a view on which the relevant kinds of changes in personality traits, beliefs, preferences, etc. result in a different thing existing (one of many possible future Davids). If the former, then I’ll just say that I meant only to be talking about the “numerical identity” sense of sameness over time, so we don’t get the problem you’re describing in the intra-individual case. If the latter, then that’s a pretty big philosophical dispute that we’re unlikely to resolve in a comment thread!
I don’t necessarily care about the concept of personal identity over time, but I think there’s a very strong decision-making foundation for considering uncertainty about future states. In one framing, I buy insurance because in some future states it is very valuable and in other future states it is not. I am effectively transferring money from one future version of myself to another. That sticks with a numerical identity view of myself, but it remains critical to consider different futures even without a complex view of what makes me “the same person”.
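To put toy numbers on the insurance framing (the figures are invented purely for illustration): say I have wealth 100, face a 10% chance of losing 50, and can buy full coverage for a premium of 6, which is more than the expected loss of 5. With a risk-averse logarithmic utility,

$$\text{uninsured: } 0.9\ln(100) + 0.1\ln(50) \approx 4.536, \qquad \text{insured: } \ln(100 - 6) \approx 4.543,$$

so buying the policy is the better plan even though it loses money in expectation. The calculation only makes sense because I weigh what happens to me across different possible futures, while remaining one and the same person on any numerical identity view.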
But I think that if you embrace the view you present as obvious for contractualists, on which we treat future people fundamentally differently from present people and do not allow consideration of different potential futures, you end up with some very confused notions about how to plan under uncertainty, and you can never prioritize investments that pay off primarily in even the intermediate-term future. For example, mitigating emissions to limit climate change should be ignored, because we can do more good for current people by addressing present harms than by preventing future ones; we should emit more and ignore the fact that this will, with certainty, make the future worse, because those future people don’t have much of a moral claim. And from a consequentialist viewpoint, which I think is relevant even if we’re not accepting it as a guiding moral principle, we’d all be much, much worse off if this sort of reasoning had been embraced in the past.
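To spell out why that follows (a stylized illustration, with the weights chosen by me): if the view assigns future people’s claims only an infinitesimal weight $w$, then for any finite prevented future harm $H$ the weighted value $wH$ remains infinitesimal, and any finite present benefit $b > 0$ outranks it. On that arithmetic, a dollar spent repairing climate damage today always beats a dollar of mitigation that averts far larger damages decades from now, which is exactly the planning failure I have in mind.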