We do effectively discount the value of future lives, based on our uncertainty about the future. If I’m trying to do something today that will be helpful 100 years from now, I don’t know if my efforts will actually be relevant in 100 years… I don’t even know for certain if humanity will still be around! So it’s reasonable to discount our future plans because we don’t know how the future will unfold. But that’s all just due to our own uncertainty. Philosophically speaking, it doesn’t make much sense to discount the value of future lives purely because they’re far away from us in time.
The situation with helping future generations is just like the situation of helping people who are far away. It doesn’t make much moral sense to say that someone’s life is objectively less valuable just because they’re far away. When we learn about a disaster that happened to people far away from us, it usually feels abstract and small compared to if a similar disaster struck nearby—but of course, to the people who experienced it firsthand, the experience was perfectly vivid and intense! (If we wanted to check for ourselves, we could travel there and see.) Similarly, if something is absolutely guaranteed to happen a decade from now, that feels abstract and small compared to if it was going to happen tomorrow. But eventually people will be living through it as it happens, and it’ll be perfectly vivid and real to them! (It will even feel real to us too, if we just wait around long enough!) That’s why most philosophers think it’s unjustified to discount the moral value of the future—what most people really mean by “discounting the future” is “discounting uncertainty”, and there are often better ways to do that than just applying a compounding yearly discount rate to all of eternity.
...And now, having spent my 300-word budget, some notes / follow-up:
Q: Okay, we can call it “uncertainty weighting” but isn’t that just the same thing? A: Well, it’s an important moral distinction. Also, the traditional approach of using a compounding yearly percentage works well in finance, but it starts giving strange answers in other contexts. (See Pablo’s example about how a 1% discount rate implies a huge difference between a death in 10,000 years versus a death in 20,000 years, when intuitively most people would say the two deaths are probably about equally bad.)
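Pablo's example can be made concrete with a quick calculation. (The 1% rate and the 10,000/20,000-year horizons come from the example above; the code itself is just my illustration.)

```python
# Illustration: a 1% yearly discount rate, compounded over very long horizons.
RATE = 0.01

def discount_factor(years, rate=RATE):
    """Present-value weight of an event `years` in the future."""
    return (1 - rate) ** years

w_10k = discount_factor(10_000)
w_20k = discount_factor(20_000)

# Both weights are already astronomically small, yet the death in 20,000
# years receives more than 10^40 times less weight than the one in
# 10,000 years -- even though intuitively the two seem about equally bad.
ratio = w_10k / w_20k
print(f"10k-year weight: {w_10k:.3e}")
print(f"20k-year weight: {w_20k:.3e}")
print(f"ratio: {ratio:.3e}")
```

Note that this is a statement about relative weights: under compounding exponential discounting, every extra 10,000 years multiplies the weight by the same tiny factor, which is what produces the counterintuitive gap.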
Q: Isn’t there so much uncertainty about the future that it’s worthless to plan for things over thousand-year timescales? A: There’s certainly a lot of uncertainty! Maybe you’re right, and the world is so complex and chaotic that it’s literally impossible to know what actions are helpful or harmful for the far future—a situation philosophers call “moral cluelessness”. On the other hand, when you actually start researching different potential actions, it seems like there are things we can do that might really help the future a lot. Reducing “existential risk” is one of the best examples: it would be really bad if everybody died in the next century or two, and human civilization ended forever. If we can help avoid going extinct, that’s something concrete we can work on in the near-term which would benefit civilization far into the future. But different experts have different opinions on whether we’re in a situation of “moral cluelessness” or not.
Q: I’ve heard that humans actually discount the future even MORE than exponentially… they discount hyperbolically! Doesn’t this show that highly valuing the present is a built-in human universal, a “pure time preference”? A: Glad you brought that up! This is one of my favorite facts—hyperbolic discounting is a famous example of human irrationality and impatience, and yet it might turn out to be rational behavior after all! Exponential discounting is rational when you are dealing with a constant, known rate of risk (called a hazard rate). That’s a good approximation in some well-characterized financial situations. But in the real world, there are many times when we have no idea what the true rate of risk will be! And in these situations, when we have uncertainty about the value of the hazard rate, the math actually tells us that we should use hyperbolic discounting. (This also resolves the “death in 10K vs 20K years” paradox.) So, it’s not that humans are born with a “pure time preference”. As I see it, hyperbolic discounting actually reinforces the idea that what we’re really doing is rationally discounting our own uncertainty about the future, not anything about events getting intrinsically less important merely because they’re far away.
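The "uncertain hazard rate → hyperbolic discounting" claim can be checked numerically. The sketch below is my own illustration, not from the post: it assumes an exponential prior over the hazard rate λ (chosen because it makes the math exact—averaging exp(−λt) over that prior gives exactly 1/(1 + μt), a hyperbolic curve).

```python
import math
import random

random.seed(0)

# If the hazard rate λ were known and constant, the survival weight at
# time t would be exp(-λ*t): exponential discounting. If λ is uncertain,
# we should average exp(-λ*t) over our belief about λ. With an
# exponential prior of mean MU, that average is exactly 1 / (1 + MU*t):
# hyperbolic discounting.
MU = 0.01  # mean hazard rate under our (uncertain) belief

# Draw samples of λ from an exponential distribution with mean MU.
samples = [random.expovariate(1 / MU) for _ in range(200_000)]

def expected_discount(t):
    """Monte Carlo estimate of E[exp(-λ t)] over the sampled hazard rates."""
    return sum(math.exp(-lam * t) for lam in samples) / len(samples)

for t in (10, 100, 1000):
    mc = expected_discount(t)
    hyperbolic = 1 / (1 + MU * t)
    print(f"t={t:>5}: monte carlo={mc:.4f}, hyperbolic={hyperbolic:.4f}")
```

This also shows why the 10K-vs-20K paradox goes away: under the hyperbolic curve 1/(1 + 0.01t), the weight at 20,000 years is only about half the weight at 10,000 years, rather than 10^40 times smaller.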