The view that we should discount the moral value of future people is often motivated by an analogy to discounting in financial contexts. It makes sense to discount a cash flow in proportion to how removed it is from the present, because money compounds over time and because risk increases with time. However, these are instrumental considerations for discounting the future. Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted. There are good reasons for thinking that this sort of “intrinsic discounting” is indefensible.
First, intrinsic discounting has very counterintuitive implications. Suppose a government decides to get rid of radioactive waste without taking the necessary safety precautions. A girl is exposed to this waste and dies as a result. This death is a moral tragedy regardless of whether the girl lives now or 10,000 years from now. Yet a pure discount rate of 1% implies that the death of the present girl is more than 10^43 times as bad as the death of the future girl.
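(To check the arithmetic behind that figure: at a pure discount rate of 1% per year, a harm $t$ years away is down-weighted by a factor of $(1.01)^t$, so over 10,000 years

$$(1.01)^{10{,}000} \approx 1.6 \times 10^{43},$$

which is where the "more than 10^43" comes from.)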
Second, the main argument for intrinsic discounting is that people do appear to exhibit a degree of pure time preference. But while standard economic models discount the future exponentially, people discount it hyperbolically. So people’s preferences do not support discounting as it is usually modeled. More fundamentally, relying on what present people prefer to decide whether the future should be discounted begs the question against opponents of discounting.
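To make the contrast concrete (a standard textbook formulation, not one given in the answer itself): exponential discounting weights a benefit $t$ years away by

$$D_{\text{exp}}(t) = \frac{1}{(1+r)^{t}},$$

whereas the hyperbolic discounting observed in behavioral experiments is better fit by

$$D_{\text{hyp}}(t) = \frac{1}{1+kt}.$$

Unlike the exponential form, the hyperbolic form is time-inconsistent: the relative weight a person assigns to two future dates changes as those dates draw nearer, which is one reason these preferences are usually treated as a bias rather than as a normative standard.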
Finally, an analogy with space seems to undermine intrinsic discounting. Suppose a flight from Paris to New York crashes, killing everyone on board. Someone in New York learns about the incident and says: “To decide how much to lament this tragedy, I must first learn how far away the plane was from me when the accident occurred.” This comment seems bizarre. But it is analogous to saying that, in deciding how much to value people in the future, we first need to know how far away they are from us in time. As the philosopher Derek Parfit once remarked, “Remoteness in time has, in itself, no more significance than remoteness in space.”
[I’ve shortened the comment after noticing that it exceeded the requested length.]
Thanks for your submission Pablo :)
> money compounds over time and because risk increases with time... However, these are instrumental considerations for discounting the future.

> Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted.

Is this really true?
No, I mean seriously, is this true? I’m dumb and not a philosopher.
I don’t intrinsically discount the value of people or morally relevant entities. In fact, it would take me time to even come up with reasons why anyone would discount anyone else, whether they are far away in space or time, or alien to us. Like, this literally seems like the definition of evil?
So this seems to make me really incompetent at coming up with an answer to this post.
Now, drawing on knowledge of x-risk acquired from YouTube cartoons: there are kinds of x-risk we can’t prevent. For example, being in an alien zoo, being in a simulation, or living in a “false vacuum” all create forms of x-risk that we can’t prevent or even know about.
Now, given these x-risks, the reason why we might discount the future is instrumental, and at least some of these reasons pretty much follow the economic arguments: if we think there is a 0.0001% annual chance of vacuum decay or of some catastrophe that we can’t prevent or even understand, this immediately bounds the long-term future.
Now, note that if we parameterize this low percentage (0.0001% or something), it’s likely we can set up a model where the current programs of longtermism and x-risk reduction, or even much larger and more powerful versions of them, are fully justified for pretty reasonable ranges of this percentage.
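Here is a minimal sketch of that kind of model (all specific numbers and names below are mine, for illustration only; it assumes a constant, independent annual probability of unpreventable catastrophe):

```python
def expected_horizon(p_annual: float) -> float:
    """Expected number of years until an extinction event, given a
    constant independent annual probability p_annual. The waiting
    time is geometric, so the mean is 1 / p_annual."""
    return 1.0 / p_annual

# Illustrative numbers only.
p_unavoidable = 0.0001 / 100   # the 0.0001%/year floated above, i.e. 1e-6
p_preventable = 1e-6           # assumed annual risk we *can* reduce

p_total = p_unavoidable + p_preventable

baseline = expected_horizon(p_total)                        # ~500,000 years
improved = expected_horizon(p_total - 0.5 * p_preventable)  # ~666,667 years

print(f"Baseline expected future: {baseline:,.0f} years")
print(f"After halving preventable risk: {improved:,.0f} years")
print(f"Expected years gained: {improved - baseline:,.0f}")
```

The unpreventable risk does bound the expected future (here to about a million years even if all preventable risk were eliminated), but the bound is so far out that halving the preventable risk still buys on the order of 170,000 expected years, which is the sense in which x-risk reduction stays justified for reasonable values of this parameter.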
Hi Charles,
I’m happy to respond, but am reluctant to do so here, since the original post stated that “We don’t want other users discussing other people’s answers, so we will moderate away those comments.”