I don’t think the right response is to respond directly to this claim. I think the right response is to ask a question aimed at identifying the crux of our disagreement, and then to respond directly to that. In my experience of talking to many people who make this sort of claim, especially those matching the description given in the post, only a minority literally hold the view ‘we should have a nonzero pure rate of time preference’; most have some other type of reason for the intuition, which I may or may not disagree with in practice.
Would you accept answers of the form:
A question which establishes whether the claim is ‘we should have a rate of pure time preference [1]’ or ‘we have practical reasons to weight effects which are near in time more highly in our decision-making, even if we are impartial consequentialists with no pure rate of time preference, for example due to uncertainty about the reliability of long-term forecasts, or a belief that it is impossible to reduce the probability of existential catastrophe per unit time to zero, etc. [2]’
Suggested response 1, if the person holds position [1]
Suggested response 2, or, more productively, suggested avenues for further discussion, if the person holds one of several versions of position [2]
?
I’m not promising to write such an answer in either case, and accepting answers of the above form doesn’t hugely change the probability that I’ll do so, but because I think that the approach above is the best response in this sort of situation, I think it would be great if others were encouraged to consider responses of this form.
You’re explicitly branded as an EA organisation. When you’re communicating this answer to people, how are you going to handle the fact that different people in EA have very different views about the value of the future?
EAs seem to have different views about the value of the future in the sense that they disagree about population ethics (i.e. how to evaluate outcomes that differ in the numbers or the identities of the people involved). To my knowledge, there are no significant disagreements concerning time discounting (i.e. how much, if at all, to discount welfare on the basis of its temporal location). For example, I’m not aware of anyone who thinks that a LLIN distributed a year from now does less good than a LLIN distributed now because the welfare of the first recipient, by virtue of being more removed from the present, matters less than the welfare of the second recipient.
One can have a positive rate of (intergenerational) pure time preference for agent-relative reasons (see here). I’m actually less certain than you are (and than alexrjl is) that people don’t discount in this way. Indeed I think many people discount in a similar way spatially e.g. “I have obligations to help the homeless people in my town as they are right there”.
I think if EA wants to attract deontologists and virtue ethicists, we need to speak in their language and acknowledge arguments like this. Interestingly, the paper I linked to argues that discounting based on agent-relative reasons doesn’t allow one to escape longtermism as we can’t discount very much (I explain here). I’m not sure if a hardcore deontologist would be convinced by that, but I think that’s the route we’d have to go down when engaging with them.
Therefore I agree with alexrjl that we need to identify the crux of disagreements to know how best to respond. Optimal responses can take various forms.
If I were an onlooker I might be thinking “hmm, looks like these people are trying to settle difficult EA questions in favour of certain positions and are going to advertise those as the correct answers when there is still a lot of unsettled debate”
I think a good answer to the prompt would acknowledge the debate in EA and that people have different views.
I ought to clarify: for the purposes we’ll be using our FAQ for, we want to outline and defend our urgent longtermist view. That’s why in the prompt I’m looking for answers that fall on one particular side of the question (the side that best represents the views and goals of our organisation, which are urgent longtermist). If I weren’t doing this bounty I would just be writing an answer on that side myself, and I’m looking to outsource that work here!
I think this is a very different set of goals and views than those of the EA movement as a whole, and we’re not trying to represent those—sorry for any confusion! I should have specified more clearly what our use case for the FAQ is. For example, I think this would probably be bad as a FAQ on EA.org.
I also think that a lot of these questions will remain unsettled. Nevertheless, for this bounty I want people to be able to indicate their tentative best-guess answer to the question in a decision-relevant way, without getting caught in the failure mode of just providing a survey of different views.
I think that the valuable discussion and debate over the answers to the question should continue elsewhere :)
I have now made some small clarifications to the original post. If we decide to continue with the bounty program then I’ll try and do more clarifications to our aims and why we’re doing it this way :)
[this is a comment about the post/project, not an answer to the question about moral discounting]
I’m curious—when talking to people new to EA, have you heard that question a lot, in those words and terms?
I’m asking because—and I might be typical-minding here—I’d be surprised if most people who are new to longtermism have the explicit belief ‘people in the future have less moral value than people in the present’. In particular, the language of moral discounting sounds very EA-ish to me. I imagine that if you ask most people who are sceptical of longtermism ‘so do future people have less moral value than present people?’, they’d be like ‘of course not, but [insert other argument for why it nonetheless makes more sense to focus on the present]’.
(Analogously, imagine an EA having a debate with someone who thinks that we should focus on helping people in our local communities. At one point the EA says ‘so, do you think that people in other countries have less moral value than people in your community?’
I find it hard to imagine that the local-communitarian would say ‘yeah! Screw people in other countries!’ [even if from an EA perspective, their actions and beliefs would seem to entail this attitude]
I find it more likely that they would say something like ‘of course people everywhere have moral value, but it’s my job to help people in my community, and people in other countries should be helped by people in their own communities’. And they might give further reasons for why they think this.)
The view that we should discount the moral value of future people is often motivated by an analogy to discounting in financial contexts. It makes sense to discount a cash flow in proportion to how removed it is from the present, because money compounds over time and because risk increases with time. However, these are instrumental considerations for discounting the future. Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted. There are good reasons for thinking that this sort of “intrinsic discounting” is indefensible.
First, intrinsic discounting has very counterintuitive implications. Suppose a government decides to get rid of radioactive waste without taking the necessary safety precautions. A girl is exposed to this waste and dies as a result. This death is a moral tragedy regardless of whether the girl lives now or 10,000 years from now. Yet a pure discount rate of 1% implies that the death of the present girl is more than 10^43 times as bad as the death of the future girl.
Second, the main argument for intrinsic discounting is that people do appear to exhibit a degree of pure time preference. But while the models discount the future exponentially, people discount the future hyperbolically. So people’s preferences do not support discounting as it is usually modeled. More fundamentally, relying on what present people prefer to decide whether the future should be discounted begs the question against opponents of discounting.
Finally, an analogy with space seems to undermine intrinsic discounting. Suppose a flight from Paris to New York crashes, killing everyone on board. Someone in New York learns about the incident and says: “To decide how much to lament this tragedy, I must first learn how far away the plane was from me when the accident occurred.” This comment seems bizarre. But it is analogous to saying that, in deciding how much to value people in the future, we first need to know how far away they are from us in time. As the philosopher Derek Parfit once remarked, “Remoteness in time has, in itself, no more significance than remoteness in space.”
[I’ve shortened the comment after noticing that it exceeded the requested length.]
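To make the scale in the “First” example above concrete, here is a minimal back-of-the-envelope sketch, assuming the 1% pure discount rate is compounded annually over the 10,000-year gap (the numbers are purely illustrative):

```python
# How a 1% annual pure discount rate compounds over very long horizons,
# as in the radioactive-waste example above.
rate = 0.01      # assumed pure rate of time preference per year
years = 10_000   # how far in the future the second girl lives

# Relative moral weight the discounter gives a present death vs. the future death.
ratio = (1 + rate) ** years
print(f"A present death is counted as {ratio:.2e} times as bad")  # ~1.6e43

# The same gap separates deaths 10,000 and 20,000 years from now,
# which is relevant to the discussion of discounting further below.
```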
“money compounds over time and because risk increases with time... However, these are instrumental considerations for discounting the future.”
“Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted.”
Is this really true?
No, I mean seriously, is this true? I’m dumb and not a philosopher.
I don’t intrinsically discount the value of people or morally relevant entities. In fact, it would take me time to even come up with reasons why anyone would discount anyone else, whether they are far away in space or time, or alien to us. Like, this literally seems like the definition of evil?
So this seems to make me really incompetent at coming up with an answer to this post.
Now, using knowledge of x-risk acquired from YouTube cartoons: there are kinds of x-risk we can’t prevent. For example, being in an alien zoo, a simulation, or a “false vacuum” all create forms of x-risk we can’t prevent or even know about.
Now, given these x-risks, the reason why we might discount the future is for instrumental reasons, and at least some of these pretty much follow economic arguments: if we think there is a 0.0001% chance per year of vacuum decay or some catastrophe that we can’t prevent or even understand, this immediately bounds the expected value of the long-term future.
Now, note that if we parameterize this low percentage (0.0001% or something), it’s likely we can set up some model where the current program of longtermism or x-risk reduction, or even much larger and more powerful versions of it, is fully justified for pretty reasonable ranges of this percentage.
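A rough sketch of that bounding argument, reading the 0.0001% as an unavoidable annual extinction probability (the per-year reading and all the numbers here are assumptions, chosen purely for illustration):

```python
# With an unavoidable annual extinction probability p, expected future years
# of civilisation are bounded by the sum over t of (1 - p)^t, roughly 1 / p.
p_unavoidable = 1e-6   # 0.0001% per year, purely illustrative

print(f"Bound on expected future years: {1 / p_unavoidable:,.0f}")  # 1,000,000

# Even under that bound, reducing *preventable* risk still buys a lot of
# expected future, so x-risk programmes can look justified for a wide
# range of values of p_unavoidable.
p_preventable = 1e-3   # illustrative preventable annual risk
print(f"With preventable risk too:      {1 / (p_unavoidable + p_preventable):,.0f}")
print(f"If preventable risk is halved:  {1 / (p_unavoidable + p_preventable / 2):,.0f}")
```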
I’m happy to respond, but am reluctant to do so here, since the original post stated that “We don’t want other users discussing other people’s answers, so we will moderate away those comments.”
We often seem to value the present more than the future, but I’ll argue that present and future people deserve similar moral treatment—by contrasting individual with moral decision-making, and by regarding their interests impartially.
I discount my own future welfare at a high rate because my decisions entail relevant opportunity costs: while wealth accumulates, health declines – as individual lives are uncertain and short. However, the passing of time doesn’t itself make things less important: if I somehow traveled 200 years ahead, my life wouldn’t become less valuable, just as it wasn’t worth more yesterday.
Now, morality. Though you can base your relationships on proximity, does it matter from a moral perspective whether people live here or elsewhere? In one day or in 10^10 days? If you regard everyone impartially, then discounting for uncertainty should use very low rates, since (unlike me) humanity can last eons. And if you think extinction would be terrible even in a million years: why would it be, if moral value decreased with time?
Perhaps you adopt a non-consequentialist theory, based on, e.g., reciprocity, and think future generations can’t benefit us. But our lives benefit from the extended chains of cooperation that characterize our cultures and economies – we owe those who tamed fire, and those who’ll pay our long-term debts – and from the expectation that they will continue.
We could see this as some sort of community. This idea is hard to internalize, but sometimes I almost feel it, like in Dear Theodosia: “[...] we’ll give the world to you / And you’ll blow us all away.” We usually want our descendants to surpass us. Thus, assuming they’ll behave similarly and want the same for their successors… Shouldn’t we want the same for every following generation? Love is not “transitive,” but perhaps caring should be.
We do effectively discount the value of future lives, based on our uncertainty about the future. If I’m trying to do something today that will be helpful 100 years from now, I don’t know if my efforts will actually be relevant in 100 years… I don’t even know for certain if humanity will still be around! So it’s reasonable to discount our future plans because we don’t know how the future will unfold. But that’s all just due to our own uncertainty. Philosophically speaking, it doesn’t make much sense to discount the value of future lives purely because they’re far away from us in time.
The situation with helping future generations is just like the situation of helping people who are far away. It doesn’t make much moral sense to say that someone’s life is objectively less valuable just because they’re far away. When we learn about a disaster that happened to people far away from us, it usually feels abstract and small compared to if a similar disaster struck nearby—but of course, to the people who experienced it firsthand, the experience was perfectly vivid and intense! (If we wanted to check for ourselves, we could travel there and see.) Similarly, if something is absolutely guaranteed to happen a decade from now, that feels abstract and small compared to if it was going to happen tomorrow. But eventually people will be living through it as it happens, and it’ll be perfectly vivid and real to them! (It will even feel real to us too, if we just wait around long enough!) That’s why most philosophers think it’s unjustified to discount the moral value of the future—what most people really mean by “discounting the future” is “discounting uncertainty”, and there are often better ways to do that than just applying a compounding yearly discount rate to all of eternity.
...And now, having spent my 300 word budget, some notes / follow-up:
Q: Okay, we can call it “uncertainty weighting” but isn’t that just the same thing? A: Well, it’s an important moral distinction. Also, the traditional approach of using a compounding yearly percentage works well in finance, but it starts giving strange answers in other contexts. (See Pablo’s example above of how a 1% discount rate implies a huge difference between a death in 10,000 years and a death in 20,000 years, when intuitively most people would say the two deaths are probably about equally bad.)
Q: Isn’t there so much uncertainty about the future that it’s worthless to plan for things over thousand-year timescales? A: There’s certainly a lot of uncertainty! Maybe you’re right, and the world is so complex and chaotic that it’s literally impossible to know what actions are helpful or harmful for the far future—a situation philosophers call “moral cluelessness”. On the other hand, when you actually start researching different potential actions, it seems like there are things we can do that might really help the future a lot. Reducing “existential risk” is one of the best examples: it would be really bad if everybody died in the next century or two, and human civilization ended forever. If we can help avoid going extinct, that’s something concrete we can work on in the near-term which would benefit civilization far into the future. But different experts have different opinions on whether we’re in a situation of “moral cluelessness” or not.
Q: I’ve heard that humans actually discount the future even MORE than exponentially… they discount hyperbolically! Doesn’t this show that highly valuing the present is a built-in human cultural universal, a “pure time preference”? A: Glad you brought that up! This is one of my favorite facts—hyperbolic discounting is a famous example of human irrationality and impatience, and yet it might turn out to be rational behavior after all! Exponential discounting is rational when you are dealing with a constant, known rate of risk (called a hazard rate). That’s a good approximation in some well-characterized financial situations. But in the real world, there are many times when we have no idea what the true rate of risk will be! And in these situations, when we have uncertainty about the value of the hazard rate, the math actually tells us that we should use hyperbolic discounting. (This also resolves the “death in 10K vs 20K years” paradox, as the link shows.) So, it’s not that humans are born with a “pure time preference”. As I see it, hyperbolic discounting actually reinforces the idea that what we’re really doing is rationally discounting our own uncertainty about the future, not anything about events getting intrinsically less important merely because they’re far away.
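The claim that uncertainty about the hazard rate turns exponential discounting into something hyperbolic can be checked numerically. Below is a small sketch, assuming (purely for illustration) an exponential prior over the unknown annual hazard rate; the 1% mean is an arbitrary choice:

```python
# Known hazard rate L: survival to year t is exp(-L * t), i.e. exponential
# discounting. Unknown L with an exponential prior of mean m: the expected
# survival probability is E[exp(-L * t)] = 1 / (1 + m * t), i.e. hyperbolic.
import math
import random

random.seed(0)
m = 0.01  # illustrative mean hazard rate (1% per year)
samples = [random.expovariate(1 / m) for _ in range(200_000)]

for t in (10, 100, 1_000, 10_000, 20_000):
    monte_carlo = sum(math.exp(-lam * t) for lam in samples) / len(samples)
    analytic = 1 / (1 + m * t)
    print(f"t={t:>6}: simulated {monte_carlo:.4f}   hyperbolic 1/(1+m*t) {analytic:.4f}")

# Note the weights at 10,000 vs 20,000 years differ only by about a factor
# of two, rather than the astronomical gap exponential discounting implies.
```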
(Note: this comment will probably draw heavily from The Precipice, because that’s by far the best argument I’ve heard against temporal discounting. I don’t have a copy of the book with me, so if this is close enough to that explanation you can just disqualify me :P)
In normal situations, it works well to discount things like money based on the time it takes to get them. After all, money is worth less as time goes on, due to inflation; something might happen to you, so you can’t collect the money later on; and there’s an inherent uncertainty in whether or not you’ll actually get the reward you’re promised later. Human lives aren’t subject to inflation—pain and suffering are pain and suffering across time, whether or not there are more people. Something might happen to the world, and I agree that it’s important to discount based on that, but that discounting works out to be relatively small in the grand scheme of things. People in the long-term future are still inherently valuable because they’re people, and their collective value is very important—and thus it should be a major consideration for people living now.
There’s one thing I’ve been ignoring, and it’s something called “pure time preference,” essentially the inherent preference for having something earlier than later just because of its position in time. Pure time preference shouldn’t be applied to the long term future for one simple reason—if you tried to apply a reasonable discount rate based on it *back* to Ancient Rome, the consuls would conclude that one moment of suffering for one of their subjects was worth as much as the entire human race today suffering for their entire lives.
Basically, we should discount the moral value of people based on the catastrophe risk—the chance that the world ends in the time from now to then, and the gains we strove for won’t mean anything. (This is a relatively small discount, all things considered, keeping substantial amounts of value in the long-term future—and it gets directly reduced by working on existential risk.) But it’s not fair to people in the future to discount based on anything else—like pure time preference, or inflation—because, given no catastrophe until then, their lives, joys, pains, and suffering are worth just as much as those of people today, or of people living in Ancient Rome.
Heavily relying on preexisting content is okay! I expect a good answer might just come from reviewing the existing literature and mashing together the content.
Note: I’m happy to hear feedback via DM, if you have any :)
That’s a question that comes up a lot, and it makes sense. It’s similar to the question of how much we should care about people in a different country. After all, it feels natural to care more about people close to you. But I think if you deconstruct this moral intuition, you’ll find that we don’t actually discount the value of a human life based on where or when they live. Instead, it’s simply easier to effectively help those close to us, so we are predisposed to do that.
It’s hard to know that your intervention will actually work the way you intend if you plan it for the future. The future is hard to predict, and circumstances might be different then. If we look to the past as an example, this is less of a problem. Do you think that a rich person in ancient Rome, 2,000 years ago, saving someone from starvation, has done a better thing than someone today who saves someone from starvation? Probably not, since they did pretty much the same thing: save someone’s life.
Now sure, the person that was saved 2000 years ago possibly impacted the world a lot in this time, and probably in a good way. Measuring total impact, the Roman philanthropist achieved more, through an additional 2000 years of ripple effects. This might be a reason to help people now instead of later, but it still doesn’t mean we should value their own lives less.
So, we should be somewhat biased to help people here and now, since we know that it works and that they will in turn have a longer future to positively affect. But I think the intrinsic value of their own lives does not depend on when they live.
Focusing on people who’ll live centuries or millennia from now can definitely feel weird, especially when there’s so much going on in the world today that seems so pressing. But I like to think about our current situation in the context of the history that got us here. There’s a lot to commend about our world, and we owe much of that to people who came before us, at least some of whom were thinking pretty selflessly about posterity. And likewise, there’s a lot that seems like it could’ve been better if previous generations had acted with a bit more foresight.

One example is the founding of the US. The founders probably could’ve served their present generation pretty well by being less thoughtful and relying on the leadership of, say, George Washington as a powerful executive. Instead, while it was extremely far from perfect, they deliberated pretty hard to set a different precedent and come up with a system that would be good after their political cohort was long gone.

On the flip side, the greenhouse effect was first proposed in the late 19th century, and certainly by the 1970s and 1980s people knew enough about climate change to at least have started investing in greener tech. But this is the classic story of shortsightedness. And look, if I were, say, a philanthropist or aid org back then, I probably would’ve thought “can we really think about funding low-carbon energy when there are millions of refugees and starving people, the threat of war between the US and Soviets, etc.?” But had people back then gotten the ball rolling on sensible climate policies, just think how much better our current world, let alone the next century, could have been. Ironically, a lot of the problems that probably seemed more pressing back then, like poverty or conflict, would be a lot better now if people had had longer time horizons. So it seems like if we’re interested in helping people, one solid approach could be to think about how we can be good ancestors.
It is understandable that we want to prioritise those who are closer to us. It’s natural, instinctive, and often helps society to function—like when parents prioritise their kids. But it can also create harmful barriers and division.
History is full of examples of humans devaluing those who seem different or distant to them: just think about how different religions have treated each other, 19th Century slavery, or even the way people prefer to give to local charities over global development.
We should be really cautious when discounting the value of other people. Time is different to space or race, but is it that different? In the past, people thought it was natural and obvious to draw moral distinctions between people on the basis of geography, religion, race or gender. There’s a risk that we might be making the same mistake when it comes to time. After all, future humans are still humans who will live, feel, cry and laugh just like we do. Wouldn’t it be awesome if we allowed this moral empathy to cross the great divide of time as well?
Imagine people in the future could look back and see how we, today, had consciously made decisions to improve their lives. It might feel like walking into a grand cathedral and knowing that the people who built it a thousand years ago knew that it would still be used for millennia. Or it might feel like what Isaac Newton called “standing on the shoulders of giants”—like when Covid vaccine developers used findings from biology and chemistry first discovered by Victorians. Our shared story on this planet would be so much richer and more beautiful if we acknowledged that just as it doesn’t matter where you were born, it shouldn’t matter when you are born.
Imagine you had a time machine. A little box that you could climb inside and use to explore past and future worlds. And so you set off to see what the future may bring. And you find a future of immense wonders: glittering domes, flying cars, and strange planets with iridescent moons. But not all is perfect. On a space station you find a child lost and separated from her family. Under a strange sun you find a soldier lying for days injured on a battlefield with no hope of help. In a virtual world you find an uploaded mind trapped alone in a blank empty cyberspace for centuries.
And imagine that your time machine comes equipped with a sonic doohickey and that you have the power to help these future strangers. Should you help them? Does it matter when they are? Does their distance from our here and now make any difference to their moral worth? Of course it does not matter. Of course you should help.
Now in real life we don’t have a time machine. So the future is distant and uncertain. And as such there are very many reasons to apply a discount rate and to lower the value we place on the future. We discount for the fact that the future will be richer than us and have resources we can only dream of. We discount for the uncertainty that our actions will have an impact as they are washed out over time. We discount for the fact that the world may end and perhaps there will be no future. We discount for ourselves if we know that we want things sooner rather than later.
But never should we discount the moral worth of future beings simply because they are in the future. There is just no case for it. Like none. I cannot think of one, philosophers around the world cannot think of one (the closest I have heard of is a rare, mostly dismissed view that beings that do not yet exist have zero moral worth), and I assume you cannot think of one either. People distanced in time are, like us, individuals. Their tears, their helplessness, their pain and sorrow, their joy and laughter all matter. They matter.
It can seem strange to focus on the wellbeing of future people who don’t even exist yet, when there is plenty of suffering that could be alleviated today. Shouldn’t we aid the people who need help now and let future generations worry about themselves?
We can see the problems with near-sighted moral concern if we imagine that past generations had felt similarly. If prior generations hadn’t cared for the future of their world, we might today find ourselves without many of the innovations we take for granted, suffering from far worse degradation of the environment, or even devastated by nuclear war. If we always prioritize the present, we risk falling into a trap of recurring moral procrastination, where each successive generation struggles against problems that could have been addressed much more effectively by the generations before.
This is not to say there are no practical reasons why it might be better to help people today. We know much more about what today’s problems are, and the future may have much better technology that makes fixing its own problems much easier. But acknowledging these practical considerations needn’t lead us to believe that helping future people is inherently less worthwhile than helping the people of the present. Just as impartial moral concern leads us to weigh the lives of individuals equally regardless of race or nationality, so too should we place everyone on equal footing regardless of when they exist in time.
Note: I wrote a post recently that tries, in part, to answer this question. The post isn’t a 2 minute answer, more like a 15 minute answer, so I’ve adapted some of it below to try and offer a more targeted answer to this question.
Let’s agree that the 8 billion people alive right now have moral worth—their lives mean something, and their suffering is bad. They constitute, for the time being, our moral circle. Now, fast forward thirty years. Billions of new people have been born. They didn’t exist before, but now they do.
Should we include them in our moral imagination now, before they are even born? There are good reasons to believe we should. Thirty years ago, many who are alive today (including me!) weren’t born. But we exist now, and we matter. We have moral worth. And choices that people and societies made thirty years ago affect people who were not yet born but who have moral worth now. Our lives are made better or worse by the causal chain that links the past to the present.
Aristotle teaches us that time, by itself, is not efficacious. He’s wrong about that in some respects—in modern economies, the mere passage of time changes the value of currency (think of inflation), and allows opportunities for new policies or technology to come into existence and scale up, etc., which might influence us to believe that we should discount the future accordingly. But he’s right when it comes to the moral worth of humans. The moral worth of humans existing now isn’t any less than the moral worth of humans a generation ago; for the same reason, the moral worth of humans a generation from now is just as important as humans’ moral worth right now.
Our choices now have the power to influence the future, the billions of lives that will come to exist in the next thirty years. Our choices now affect the conditions under which choices will be made tomorrow, which affect the conditions under which choices will be made next year, etc. And future people, who will have moral worth, whose lives will matter, will be affected by those choices. If we take seriously the notion that what happens to people matters, we have to make choices that respect the moral worth of people who don’t even exist yet.
Now expand your moral circle once more. Imagine the next thirty generations of people. So far, there have been roughly 7,500 generations of humans, starting with the evolution of Homo sapiens roughly 150,000 years ago. One estimate puts us at a total of just over 100 billion human beings who have ever lived. The next thirty generations of humans will bring into existence at least that many humans again. Each of these humans will have the same moral worth as you or I. Why should we discount their moral worth simply because they occupy a different spot on the timeline than we do?
If possible, we should strive to influence the future in a positive direction, because future people have just as much moral worth as we do. Anything less would be a catastrophic failure of moral imagination.
Looks like I missed this, but I wanted to try it out anyway. Maybe it’ll be useful to someone.
That’s a great question. Prioritizing is important. There are many problems and we want to fix what affects us. But let’s imagine this question differently.
It’s pretty nice having fresh, clean water to drink, right? That’s going to be true for you today, tomorrow, and next week. It’ll be true in 10 days or 10 years. You wouldn’t consider yourself as having less of a right to clean water today than you did 10 years ago, right? And you wouldn’t want people to make decisions today that take away your water tomorrow. Or decisions 10 years ago that ruined your water today.
You’ll probably also want your kids to have clean water—and their kids too. You won’t want anyone to take that away. Because no matter how far you go into the future, they will still need clean water as much as you do today. There’s always going to be someone around who needs clean water just like us. They’ll feel the same pain, joy, and thirst. And they will be just as real and valuable as our own future self. Just as we are as real and valuable as people 10 years ago. Future people have value the same way our future self does. And just as past people ensured we would have clean water, we should ensure that future people can enjoy clean water too.
If something won’t affect anyone for a long time, then we might spend less effort and prioritize more immediate problems that are causing suffering. Here it makes sense to discount future people’s needs somewhat—not because they are worth less, but because we have more time. An asteroid 100,000 years away doesn’t mean ignore starvation today. Nor does starvation today mean we should forget those who will feel hungry tomorrow.
My submission below is over the word limit by 243 words. I hope it can make up for its lack of brevity with some additional depth.
“Shouldn’t we discount the moral value of people in the future based on how far away they will exist into the future?”
Why do you think so?
“Well, it seems like we care about the future, but there are so many problems here and now. Shouldn’t we work on those first?”
Not if we accept that morality ought to be an impartial affair. If we have agreed that there should be no spatial discount rate, racial discount rate, or species discount rate, then we should be inclined to reject a temporal discount rate.
But perhaps you are partial to partiality, maybe we do have special obligations to certain people. However, we should still be suspicious of discounting moral value based on distance in time. A modest discount rate of 1% per year would imply that the life of Pamba, King of the Hattians, is worth 53,569,849,000,000,000,000 lives today. I do not know what the name of that number is, but I think we can agree that no person is worth that many lives.
“Fine,” you may reply, “but couldn’t there be other reasons we would want to discount the moral value of future people? Can we really say that anything we can do today will affect future people, even in-expectation? And don’t the benefits of interventions now compound into the future? If I build a hospital today, it will serve far more people than it would have if I built it in 200 years!”
We should be wary of this reply. I will only address the first concern directly but think the second is roughly analogous.
Let us say we can choose to bury nuclear waste in one of two places:
(1) near P1 City, which is prone to earthquakes, but which seismologists say will not experience another earthquake for 2,000 years, after which it will likely experience one, or
(2) near P2 City, where earthquakes never occur.
Given the long half-life of nuclear waste, we can be sure that an earthquake 2,000 or so years from now will cause the population of P1 to experience some kind of catastrophe, one which would have been averted by burying the nuclear waste near P2.
Suppose further that if we choose (1) the people in P2 will be pleased because their property values will not go down, but regardless of whether or not we choose (1), the present people in P1 will be just as well-off. It seems wrong that a small benefit to the people in P2 could justify a catastrophe we anticipated by choosing (1) just because it would happen much later.
We can still ‘discount’ the effects of improbable events in proportion to their improbability, but we should not expect improbability and distance in time to correlate more than very roughly. Taking probability into account is something we already do in expected value calculations, and it does not amount to discounting the moral value of people based on their distance from us in time.
In the same vein, the compounding benefits of some interventions do give us a reason to favor implementing them early rather than late, but not all interventions, even among those affecting the near future, have compounding benefits. So, would we really be discounting the moral value of future people by taking this into account, or, as with probability, is there some other heuristic at play?
That’s definitely a good question to ask. After all, people in the future aren’t here now, and there are a lot of problems we’re facing already. That said, I don’t think we should. I mean—do you or I have any less moral value now than the people who lived a thousand years ago? Regardless of where or when they live, the value of a human life doesn’t change. Basically, I think the default hypothesis should be “A human life is worth the same, no matter what” and we need a compelling reason to think otherwise, and I just don’t see that when it comes to future people.
There are some caveats in the real world, where things are messy. Like, if I said “Why shouldn’t we focus on people in the year 3000?”, your first thought probably wouldn’t be “Because they don’t matter as much”. It’d probably be something like “How do we know we can actually do anything that’ll impact people a thousand years from now?” That’s the hard part, but that’s discounting based on chance of success, not morality. We’re not saying helping people in a thousand years is less valuable, just that it’s a lot harder to do. Still, EA definitely has some ideas. Investing money to give later can have really big compounding effects, so that the compounding has a bigger effect than our uncertainty. Imagine you could invest a thousand dollars in something that would definitely work, or ten thousand in something just as effective that was about a 50/50 shot: in expectation, the riskier option still does far more good. There’s a whole mode of thought called “patient philanthropy” that deals with this—I could send you a podcast episode if you’d like?
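For concreteness, here is the expected-value arithmetic behind the “thousand dollars vs. a ten-thousand-dollar 50/50 shot” comparison above; the dollar figures and the 50% success probability are just the illustrative numbers from that answer:

```python
# Expected value of a certain small win vs. a riskier but larger one.
certain_value = 1_000   # definitely works
risky_value = 10_000    # just as effective if it works...
p_success = 0.5         # ...but only about a 50/50 shot

print(f"Certain option:               {certain_value}")
print(f"Risky option, in expectation: {p_success * risky_value:.0f}")  # 5000 > 1000

# Uncertainty is handled by weighting outcomes by their probability,
# not by treating the future beneficiaries as morally worth less.
```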
I’ve definitely leaned into the “conversational” aspect of this one—the argument is less rigorous and sophisticated than a lot of others in this post, but I’ve tried to optimise it for something I could understand in real time if someone was speaking to me, and wouldn’t have to read it twice.
Why would you discount it like that? I mean – it does make sense to discount the value of a change when it happens later in a person’s life, because they do not get to enjoy it for as long. But for future people, it is not that they would enjoy something to a lesser extent just because they exist in the future. Unless, of course, you are assuming that –
For example, if there was a change that would apply only to a fraction of people just before civilization ends, then it could make sense to assign a lower moral value to it, because fewer people would enjoy it.
But in EA it is popular to try to prevent the extinction of humanity, so that may be why it does not make so much sense when you bring it up. Of course, it could still make sense in some scenarios. For example, if the future is worsening – maybe people are becoming more and more villainous – then they could be discounted more. Or, if they become less conscious – I think Jason Schukraft wrote about this; if you are interested it is the Intensity of Valenced Experience across Species post on the EA Forum.
But anyway, why are you even thinking about this? Are you, like, interested in moral value, or the future? Or just, like, maybe population ethics?
I mean, so cool, but there is a lot of material that probably no one has actually reviewed in its entirety, so I am not sure who to even refer you to for this.
It can be hard to care about the wellbeing of people who might be living thousands or millions of years in the future, when they seem so abstract and remote. But imagine if you were somehow living one million years in the past, a time when Homo erectus still roamed the earth. Would you say that people living in the present have less moral value than you, just because they live one million years in the future compared to you? Would you say that their joys and pains are any less worthy of moral consideration and sympathy? I remember the joys of going to my cousin’s wedding, and conversely the atmosphere of grief going to a friend’s funeral.* I don’t think what happens in the present should matter any less from the perspective of someone living a million years in the past, and likewise, the experiences that people in the distant future have matter just as much as what happens now.
Now, there are some instrumental reasons to discount the moral value of people in the future which I think are quite legitimate. For example, if you think that there’s only a 70% chance that there will be any people a thousand years from now, you should apply a discount factor for that. You might also think that helping the present has more positive ripple effects for people in the future, so you could focus on supporting people living in the present in order to help safeguard future generations. Still, we should think of the intrinsic value of people as the same regardless of when they are living.
*This sentence can be removed or left out entirely.
Good question! I think I understand where you are coming from, but I don’t think we should do that. We are used to not really caring or thinking about future people, but I think there are other reasons behind that.
A very important one is that we don’t see future people or their problems, so we don’t sympathize with them like we do with everyone else. We have to make an effort to picture them and their troubles. As if we didn’t have enough to deal with in the here and now!
Another one is the odds of those futures. Any prediction is less likely to come true the further it reaches into the future.
And lastly, we have to take into consideration how the influence of any action diminishes over time.
So it’s not the value of a person that changes if they haven’t been born yet, but the chances of helping them. And when we decide how to use our resources, we should keep both things in mind so we can calculate the “expected value” of every possible action and choose the one with the highest.
So why do many effective altruists want to focus on those causes if there is a discount caused by lower probabilities? Because they believe that there could be many, many more people in the future than have ever existed, so the value of helping them and saving them from existential risks is high enough to turn the calculation around.
Of course, it is really difficult to make good predictions and there is no consensus on how important longtermism is, but I think we should always take into account that most of the time our emotions and desires will favor short-term things and won’t care about the issues they don’t see.
Good question, I’ll try to answer with a little analogy :)
We shouldn’t discount the moral value of people in the future based on how far away in time they are, because if two experiences are identical, it shouldn’t matter morally when they happen. Discounting their moral value would mean that a person 1,000 years from now who experiences pain will have their suffering treated as less significant just because it happens in the future.
Think about someone 1,000 years ago who stubs their toe. If they applied a discount to suffering in the far future, then someone’s much worse experience today—like breaking a bone—might be considered less bad than just stubbing a toe many years ago. That’s a bit absurd, so we shouldn’t treat future people using the same reasoning that would lead past people to care less about suffering that happens today.
Some people intuitively think that when we compare moral value across time there should be a discount rate like there is with money. But the reason there’s discounting with money is that you can earn interest on a dollar you save today! So money now is literally more valuable than money later. This doesn’t apply at all in the context of joy or pain.
You might also think that because we can’t know what will happen in the future we should discount due to uncertainty—but our current question is about how we should compare two events that we know will happen, so that’s not an argument for a moral discount rate (though uncertainty is a separate and important consideration!).
That’s why a lot of EAs think we shouldn’t treat joy as less good or pain as less bad just because it’s not happening right at this moment. Does that make sense?
Question: Why Shouldn’t We Discount the Lives of Future People?
Answer: Discounting the value of the future is a natural human thing to do, and there are contexts where it makes sense. If offered twenty dollars today or in ten years it makes sense to take it now. In ten years the money won’t buy as much. But it doesn’t make sense to value human lives this way. Here’s why.
Let’s say we weigh a future person’s life at one percent less for each year in the future they exist. That doesn’t seem so unreasonable, but the discounting compounds rapidly. If Caesar had discounted this way, he would have valued the life of one of his own as much as roughly a billion people alive today. Most people don’t think that would be fair. We shouldn’t discount the lives of future people for the same reason we would not want past people to have discounted us. We can’t get around this by tweaking that number, either: for any discount rate we imagine, we can go out far enough that we are committed to making an absurd tradeoff.
We also have no reason to treat a human life like a depreciating asset. The contexts where discounting is valid are cases where the value of something falls over time or uncertainty makes the benefit less likely. This is why we can rationally discount future monetary rewards: money loses value over time, but a person’s life does not. There is no reason to think that your life would be worth less had you been born ten years later, and the same goes for future people. Discounting future lives commits us to treating a human life as if it were something that loses value over time. Since it does not, it doesn’t make sense to discount them.
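A small sketch of the “we can’t get around this by tweaking that number” point: for any positive annual discount rate there is some horizon beyond which one present life would outweigh the entire current world population. The specific rates and the 8 billion figure are illustrative assumptions:

```python
# For each discount rate r, find the horizon t where (1 + r)^t exceeds
# the current world population, i.e. where the trade-off becomes absurd.
import math

world_population = 8e9
for r in (0.05, 0.01, 0.001, 0.0001):
    years_needed = math.log(world_population) / math.log(1 + r)
    print(f"rate {r:.2%}: one present life outweighs ~8 billion future lives "
          f"after about {years_needed:,.0f} years")
```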
This is a nice project, but as many people point out, this seems a bit fuzzy for a “FAQ” question. If it’s an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There’s probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and academic mainstreams, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable rate of discount) to other existential risks.
It’s a good point – there are often cases for discounting in a lot of decisions where we’re weighing up value. It’s usually done for two reasons. One is uncertainty: we’re less certain of stuff in the future, and therefore our actions might not do what we expect, or the reward we’re hoping for might not actually happen. The second is only relevant to financial stuff: given inflation – and that you’re likely to have more income the older you are – the money’s real value is more now than later.
The second reason doesn’t really apply here, because happiness doesn’t decrease in value as you go through generations – your happiness doesn’t matter less than your parents’ or grandparents’ did, even though $5 now means less than $5 then. The first reason is interesting, because there is a lot of uncertainty in the future. For some of our actions this means we should discount their expected effects – they might not do what we expect – but that doesn’t mean the people themselves are of less value, just that we’re not as sure how to help them. I think the actions we can be most sure will help them are things that reduce risks in the short-term future, because if everything goes to crap or we all die, that’s pretty sure to be negative for them. Uncertainty about the people themselves would look more like: ‘I know how to help these guys, but I’m not sure I want to – I’m not sure they’ll be people worth helping.’ Personally I think I might care about them more; given every generation so far has seen advances in the way they treat others, I like you already, but I reckon I might like us even better if we’d grown up 5,000 years from now!
I don’t think the right response is to directly respond to this claim. I think the right response is to ask a question aimed to identify what the crux of our disagreement is, and then to respond directly to that. In my experience of talking to many people who make this sort of claim, especially those matching the description given in the post, it is a minority who literally hold the view ‘we should have no pure rate of time preference’, instead most have some other type of reason for the intuition, which I may or may not disagree with in practice.
Would you accept answers of the form:
Question which establishes whether the claim is ‘we should have a rate of pure time preference [1]’ or ‘we have practical reasons to weight effects which are near in time more highly in our decision making, even if we are impartial consequentialists with no pure rate of time preference, for example due to uncertainty about the reliability of long-term forecasts, belief that it is impossible to reduce the probability of existential catastrophe per unit time to 0 etc. [2]’
Suggested response 1, if the person holds position [1]
Suggested response 2, or, more productively, suggested avenues for further discussion, if the person holds one of several versions of position [2]
?
I’m not promising to write such an answer in either case, and accepting answers of the above form doesn’t hugely change the probability that I’ll do so, but because I think that the approach above is the best response in this sort of situation, I think it would be great if others were encouraged to consider responses of this form.
Genuine question when you say:
Did you mean to say:
Yes, thanks!
Good question! Yes these sorts of replies are allowed and I would be excited to see them!
You’re explicitly branded as an EA organisation. When you’re communicating this answer to people, how are you going to handle the fact that different people in EA have very different views about the value of the future?
EAs seem to have different views about the value of the future in the sense that they disagree about population ethics (i.e. how to evaluate outcomes that differ in the numbers or the identities of the people involved). To my knowledge, there are no significant disagreements concerning time discounting (i.e. how much, if at all, to discount welfare on the basis of its temporal location). For example, I’m not aware of anyone who thinks that a LLIN distributed a year from now does less good than a LLIN distributed now because the welfare of the first recipient, by virtue of being more removed from the present, matters less than the welfare of the second recipient.
One can have a positive rate of (intergenerational) pure time preference for agent-relative reasons (see here). I’m actually less certain than you are (and than alexrjl is) that people don’t discount in this way. Indeed I think many people discount in a similar way spatially e.g. “I have obligations to help the homeless people in my town as they are right there”.
I think if EA wants to attract deontologists and virtue ethicists, we need to speak in their language and acknowledge arguments like this. Interestingly, the paper I linked to argues that discounting based on agent-relative reasons doesn’t allow one to escape longtermism as we can’t discount very much (I explain here). I’m not sure if a hardcore deontologist would be convinced by that, but I think that’s the route we’d have to go down when engaging with them.
Therefore I agree with alexjrl that we need to identify the crux of disagreements to know how best to respond. Optimal responses can take various forms.
Good question!
If I were an onlooker I might be thinking “hmm looks like these people are trying to settle difficult EA questions in certain positions and are going to advertise those as the correct answers when there is still a lot of unsettled debate”
I think a good answer to the prompt would acknowledge the debate in EA and that people have different views.
I ought to clarify: For the purposes we’ll be using our FAQ for we want to be outlining and defending our urgent longtermist view. That’s why in the prompt I’m looking for answers that fall on one particular side of the view (i.e. the side that best represents the views of our organisation and goals which are urgent longtermist) (if I weren’t doing this bounty I would just be writing an answer that fell on this side myself! And I’m looking to outsource my work here)
I think this is a very different set of goals and views that the EA movement as a whole, and we’re not trying to represent those—sorry for any confusion! I should have specified more clearly what our use case of the FAQ is. For example, I think this would probably be bad as a FAQ on EA.org.
I also think that a lot of these questions will be unsettled. Nevertheless for this bounty I want people to be able to indicate their tentative best guess answer to the question in a decision relevant way without getting caught in the failure mode of just providing a survey of different views.
I think that the valuable discussion and debate over the answers to the question should continue elsewhere :)
I have now made some small clarifications to the original post. If we decide to continue with the bounty program then I’ll try and do more clarifications to our aims and why we’re doing it this way :)
[this a comment about the post/project, not an answer to the question about moral discounting]
I’m curious—when talking to people new to EA, have you heard that question a lot, in those words and terms?
I’m asking because—and I might be typical-minding here—I’d be surprised if most people who are new to longtermism have the explicit belief ‘people in the future have less moral value than people in the present’. In particular, the language of moral discounting sounds very EA-ish to me. I imagine that if you ask most people who are sceptical to longtermism ‘so do future people have less moral value than present people?‘, they’d be like ‘of course not, but [insert other argument for why it nonetheless makes more sense to focus on the present.’
(Analogously, imagine an EA having a debate with someone who thinks that we should focus on helping people in our local communities. At one point the EA says ‘so, do you think that people in other countries have less moral value than people in your community?’
I find it hard to imagine that the local-communitarian would say ‘yeah! Screw people in other countries!’ [even if from an EA perspective, their actions and beliefs would seem to entail this attitude]
I find it more likely that they would say something like ‘of course people everywhere have moral value, but it’s my job to help people in my community, and people in other countries should be helped by people in their own communities’. And they might give further reasons for why they think this.)
Seems right, I agree. Thanks for the feedback!
The view that we should discount the moral value of future people is often motivated by an analogy to discounting in financial contexts. It makes sense to discount a cash flow in proportion to how removed it is from the present, because money compounds over time and because risk increases with time. However, these are instrumental considerations for discounting the future. Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted. There are good reasons for thinking that this sort of “intrinsic discounting” is indefensible.
First, intrinsic discounting has very counterintuitive implications. Suppose a government decides to get rid of radioactive waste without taking the necessary safety precautions. A girl is exposed to this waste and dies as a result. This death is a moral tragedy regardless of whether the girl lives now or 10,000 years from now. Yet a pure discount rate of 1% implies that the death of the present girl is more than 1043 times as bad as the death of the future girl.
Second, the main argument for intrinsic discounting is that people do appear to exhibit a degree of pure time preference. But while the models discount the future exponentially, people discount the future hyperbolically. So people’s preferences do not support discounting as it is usually modeled. More fundamentally, relying on what present people prefer to decide whether the future should be discounted begs the question against opponents of discounting.
Finally, an analogy with space seems to undermine intrinsic discounting. Suppose a flight from Paris to New York crashes, killing everyone on board. Someone in New York learns about the incident and says: “To decide how much to lament this tragedy, I must first learn how far away the plane was from me when the accident occurred.” This comment seems bizarre. But it is analogous to saying that, in deciding how much to value people in the future, we first need to know how far away they are from us in time. As the philosopher Derek Parfit once remarked, “Remoteness in time has, in itself, no more significance than remoteness in space.”
[I’ve shortened the comment after noticing that it exceeded the requested length.]
Thanks for your submission Pablo :)
Is this really true?
No, I mean seriously, is this true? I’m dumb and not a philosopher.
I don’t intrinsically discount the value of people or morally relevant entities. In fact, it would take me time to even come up with reasons why anyone would discount anyone else, whether they are far away in space or time, or alien to us. Like, this literally seems like the definition of evil?
So this seems to make me really incompetent at coming up with an answer to this post.
Now, using knowledge of x-risk acquired from Youtube cartoons, there are kinds of x-risk we can’t prevent. For example, being in an alien zoo, a simulation, or a “false vacuum”, all create forms of x-risk we can’t prevent or even know.
Now, given these x-risks, the reason why we might discount the future is for instrumental reasons, and at least some of these pretty much follow economic arguments: if we think there is a 0.0001% chance per unit time of vacuum decay or some catastrophe that we can’t prevent or even understand, this immediately bounds the long-term future.
Now, note that if we treat this small percentage (0.0001% or something) as a parameter, it’s likely we can set up a model where the current programs of longtermism or x-risk reduction, or even much larger, more powerful versions of them, are fully justified for pretty reasonable ranges of this percentage.
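A rough numerical sketch of that bound, assuming (purely for illustration) that the 0.0001% figure is an annual probability and that each surviving year contributes a similar amount of value:

```python
# If there is an irreducible probability p of an unpreventable catastrophe each
# year, survival for t years has probability (1 - p) ** t, so the expected
# number of future years is roughly 1 / p (a simple geometric model).
p = 0.000001  # 0.0001% per year; an illustrative assumption, not a real estimate

expected_years = 1 / p
print(f"Expected future duration: about {expected_years:,.0f} years")
# ~1,000,000 years: bounded in expectation, but loose enough that reducing the
# preventable share of existential risk still carries enormous expected value.
```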
Hi Charles,
I’m happy to respond, but am reluctant to do so here, since the original post stated that “We don’t want other users discussing other people’s answers, so we will moderate away those comments.”
We often seem to value the present more than the future; but I’ll argue that present and future people deserve similar moral treatment—by contrasting individual and moral decision-making, and by regarding everyone’s interests impartially.
I discount my own future welfare at a high rate because my decisions entail real opportunity costs: while wealth accumulates, health declines – and individual lives are uncertain and short. However, the passing of time doesn’t itself make things less important: if I somehow traveled 200 years ahead, my life wouldn’t become less valuable, just as it wasn’t worth more yesterday.
Now, morality. Though you can base your relationships on proximity, does it matter from a moral perspective if people live here or elsewhere? In one or 10^10 days? If you regard all impartially, then discounting for uncertainty should use very low rates, since (unlike me) humanity can last eons. And if you think extinction is terrible, even in a million years: Why would it be so if moral value decreased with time?
Perhaps you adopt a non-consequentialist theory, based on, e.g., reciprocity, and think future generations can’t benefit us. But our lives benefit from the extended chains of cooperation that characterize our cultures and economies – we owe those who tamed fire, and those who’ll pay our long-term debts – and from the expectation that those chains will remain.
We could see this as some sort of community. This idea is hard to internalize, but sometimes I almost feel it, like in Dear Theodosia: “[...] we’ll give the world to you / And you’ll blow us all away.” We usually want our descendants to surpass us. Thus, assuming they’ll behave similarly and want the same for their successors… Shouldn’t we want the same for every following generation? Love is not “transitive,” but perhaps caring should be.
Thanks for your submission Ramiro :)
We do effectively discount the value of future lives, based on our uncertainty about the future. If I’m trying to do something today that will be helpful 100 years from now, I don’t know if my efforts will actually be relevant in 100 years… I don’t even know for certain if humanity will still be around! So it’s reasonable to discount our future plans because we don’t know how the future will unfold. But that’s all just due to our own uncertainty. Philosophically speaking, it doesn’t make much sense to discount the value of future lives purely because they’re far away from us in time.
The situation with helping future generations is just like the situation of helping people who are far away. It doesn’t make much moral sense to say that someone’s life is objectively less valuable just because they’re far away. When we learn about a disaster that happened to people far away from us, it usually feels abstract and small compared to if a similar disaster struck nearby—but of course, to the people who experienced it firsthand, the experience was perfectly vivid and intense! (If we wanted to check for ourselves, we could travel there and see.) Similarly, if something is absolutely guaranteed to happen a decade from now, that feels abstract and small compared to if it was going to happen tomorrow. But eventually people will be living through it as it happens, and it’ll be perfectly vivid and real to them! (It will even feel real to us too, if we just wait around long enough!) That’s why most philosophers think it’s unjustified to discount the moral value of the future—what most people really mean by “discounting the future” is “discounting uncertainty”, and there are often better ways to do that than just applying a compounding yearly discount rate to all of eternity.
...And now, having spent my 300 word budget, some notes / follow-up:
Q: Okay, we can call it “uncertainty weighting” but isn’t that just the same thing? A: Well, it’s an important moral distinction. Also, the traditional approach of using a compounding yearly percentage works well in finance, but it starts giving strange answers in other contexts. (See Pablo’s example of how a 1% discount rate compounds; the same arithmetic implies a huge difference between a death in 10,000 years and a death in 20,000 years, when intuitively most people would say the two deaths are probably about equally bad.)
Q: Isn’t there so much uncertainty about the future that it’s worthless to plan for things over thousand-year timescales? A: There’s certainly a lot of uncertainty! Maybe you’re right, and the world is so complex and chaotic that it’s literally impossible to know what actions are helpful or harmful for the far future—a situation philosophers call “moral cluelessness”. On the other hand, when you actually start researching different potential actions, it seems like there are things we can do that might really help the future a lot. Reducing “existential risk” is one of the best examples: it would be really bad if everybody died in the next century or two, and human civilization ended forever. If we can help avoid going extinct, that’s something concrete we can work on in the near-term which would benefit civilization far into the future. But different experts have different opinions on whether we’re in a situation of “moral cluelessness” or not.
Q: I’ve heard that humans actually discount the future even MORE than exponentially… they discount hyperbolically! Doesn’t this show that highly valuing the present is a built-in human cultural universal, a “pure time preference”? A: Glad you brought that up! This is one of my favorite facts—hyperbolic discounting is a famous example of human irrationality and impatience, and yet it might turn out to be rational behavior after all! Exponential discounting is rational when you are dealing with a constant, known rate of risk (called a hazard rate). That’s a good approximation in some well-characterized financial situations. But in the real world, there are many times when we have no idea what the true rate of risk will be! And in these situations, when we have uncertainty about the value of the hazard rate, the math actually tells us that we should use hyperbolic discounting. (This also resolves the “death in 10K vs 20K years” paradox, as the link shows.) So, it’s not that humans are born with a “pure time preference”. As I see it, hyperbolic discounting actually reinforces the idea that what we’re really doing is rationally discounting our own uncertainty about the future, not anything about events getting intrinsically less important merely because they’re far away.
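Here is a minimal numerical sketch of that last point; the exponential prior over the hazard rate (mean 2% per year) is purely an illustrative assumption, not an estimate of real-world risk.

```python
import numpy as np

# Known, constant hazard rate -> exponential discounting: exp(-hazard * t).
# Uncertain hazard rate -> average the exponential curves over that uncertainty.
mean_hazard = 0.02                                     # illustrative assumption
rng = np.random.default_rng(0)
hazards = rng.exponential(mean_hazard, size=200_000)   # assumed exponential prior

t = np.array([1, 10, 50, 100, 500])
mixture = np.exp(-np.outer(hazards, t)).mean(axis=0)   # averaged discount factor
hyperbolic = 1 / (1 + mean_hazard * t)                 # hyperbolic curve
exponential = np.exp(-mean_hazard * t)                 # naive exponential curve

for row in zip(t, mixture, hyperbolic, exponential):
    print("t=%4d  mixture=%.4f  hyperbolic=%.4f  exponential=%.4f" % row)
# The averaged ("mixture") column tracks the hyperbolic curve, not the
# exponential one, so the far future is discounted far less harshly.
```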
Thanks for your submission Jackson :)
(Note: this comment will probably draw heavily from The Precipice, because that’s by far the best argument I’ve heard against temporal discounting. I don’t have a copy of the book with me, so if this is close enough to that explanation you can just disqualify me :P)
In normal situations, it works well to discount things like money based on the time it takes to get them. After all, money is worth less as time goes on, due to inflation; something might happen to you, so you can’t collect the money later on; and there’s an inherent uncertainty in whether or not you’ll actually get the reward you’re promised, later. Human lives aren’t subject to inflation—pain and suffering are pain and suffering across time, whether or not there are more people. Something might happen to the world, and I agree that it’s important to discount based on that, but that discounting works out to be relatively small in the grand scheme of things. People in the long-term future are still inherently valuable because they’re people, and their collective value is very important—and thus it should be a major consideration for people living now.
There’s one thing I’ve been ignoring, and it’s something called “pure time preference,” essentially the inherent preference for having something earlier than later just because of its position in time. Pure time preference shouldn’t be applied to the long term future for one simple reason—if you tried to apply a reasonable discount rate based on it *back* to Ancient Rome, the consuls would conclude that one moment of suffering for one of their subjects was worth as much as the entire human race today suffering for their entire lives.
Basically, we should discount the moral value of people based on the catastrophe risk—the chance that the world ends in the time from now to then, and the gains we strove for won’t mean anything. (Which is a relatively small discount, all things considered, keeping substantial amounts of value in the long-term future—and one that gets directly reduced by working on existential risk.) But it’s not fair to people in the future to discount based on anything else—like pure time preference, or inflation—because given no catastrophe until then, their lives, joys, pains, and suffering are worth just as much as those of people today, or people living in Ancient Rome.
Thanks for your submission!
Heavily relying on preexisting content is okay! I expect a good answer might just come from reviewing the existing literature and mashing together the content
Note: I’m happy to hear feedback via DM, if you have any :)
That’s a question that comes up a lot, and it makes sense. It’s similar to the question of how much we should care about people in a different country. After all, it feels natural to care more about people close to you. But I think if you deconstruct this moral intuition, you’ll find that we don’t actually discount the value of a human life based on where or when they live. Instead, it’s simply easier to effectively help those close to us, so we are predisposed to do that.
It’s hard to know that your intervention will actually work the way you intend if you’re planning it for the future. The future is hard to predict, and circumstances might be different then. If we look to the past as an example, this is less of a problem. Do you think that a rich person in ancient Rome, 2,000 years ago, saving someone from starvation, has done a better thing than someone today who saves someone from starvation? Probably not, since they did pretty much the same thing: save someone’s life.
Now sure, the person that was saved 2000 years ago possibly impacted the world a lot in this time, and probably in a good way. Measuring total impact, the Roman philanthropist achieved more, through an additional 2000 years of ripple effects. This might be a reason to help people now instead of later, but it still doesn’t mean we should value their own lives less.
So, we should be somewhat biased to help people here and now, since we know that it works and that they will in turn have a longer future to positively affect. But I think the intrinsic value of their own lives does not depend on when they live.
Thanks for your submission!
Focusing on people who’ll live centuries or millennia from now can definitely feel weird, especially when there’s so much going on in the world today that seems so pressing. But I like to think about our current situation in the context of the history that got us here. There’s a lot to commend about our world, and we owe much of that to people who came before us, at least some of whom were thinking pretty selflessly about posterity. And likewise, there’s a lot that seems like it could’ve been better if previous generations had acted with a bit more foresight.

One example is the founding of the US. The founders probably could’ve served their present generation pretty well by being less thoughtful and relying on the leadership of, say, George Washington as a powerful executive. Instead, while it was extremely far from perfect, they deliberated pretty hard to set a different precedent and come up with a system that would be good after their political cohort was long gone.

On the flip side, the greenhouse effect was first proposed in the late 19th century, and certainly by the 1970s and 1980s people knew enough about climate change to at least have started investing in greener tech. But this is the classic story of shortsightedness. And look, if I were, say, a philanthropist or aid org back then, I probably would’ve thought “can we really think about funding low-carbon energy when there’s millions of refugees and starving people, the threat of war between the US and Soviets, etc?” But had people back then gotten the ball rolling on sensible climate policies, just think how much better our current world, let alone the next century, could have been. Ironically, a lot of the problems that probably seemed more pressing back then, like poverty or conflict, would be a lot better now if people had had longer time horizons.

So it seems like if we’re interested in helping people, one solid approach could be to think about how we can be good ancestors.
Thanks for your submission!
It is understandable that we want to prioritise those who are closer to us. It’s natural, instinctive, and often helps society to function—like when parents prioritise their kids. But it can also create harmful barriers and division.
History is full of examples of humans devaluing those who seem different or distant to them: just think about how different religions have treated each other, 19th-century slavery, or even the way people prefer to give to local charities over global development.
We should be really cautious when discounting the value of other people. Time is different to space or race, but is it that different? In the past, people thought it was natural and obvious to draw moral distinctions between people on the basis of geography, religion, race or gender. There’s a risk that we might be making the same mistake when it comes to time. After all, future humans are still humans who will live, feel, cry and laugh just like we do. Wouldn’t it be awesome if we allowed this moral empathy to cross the great divide of time as well?
Imagine people in the future could look back and see how we, today, had consciously made decisions to improve their lives. It might feel like walking into a grand cathedral and knowing that the people who built it a thousand years ago knew that it would still be used for millennia. Or it might feel like what Isaac Newton called “standing on the shoulders of giants”—like when Covid vaccine developers used findings from biology and chemistry first discovered by Victorians. Our shared story on this planet would be so much richer and more beautiful if we acknowledged that just as it doesn’t matter where you were born, it shouldn’t matter when you are born.
Thanks for your submission!
Imagine you had a time machine. A little box that you could climb inside and use to explore past and future worlds. And so you set off to see what the future may bring. And you find a future of immense wonders: glittering domes, flying cars, and strange planets with iridescent moons. But not all is perfect. On a space station you find a child lost and separated from her family. Under a strange sun you find a soldier lying for days injured on a battlefield with no hope of help. In a virtual world you find an uploaded mind trapped alone in a blank empty cyberspace for centuries.
And imagine that your time machine comes equipped with a sonic doohickey and that you have the power to help these future strangers. Should you help them? Does it matter when they are? Does their distance from our here and now make any difference to their moral worth? Of course it does not matter. Of course you should help.
Now in real life we don’t have a time machine. So the future is distant and uncertain. And as such there are very many reasons to apply a discount rate and to lower the value we place on the future. We discount for the fact that the future will be richer than us and have resources we can only dream of. We discount for the uncertainty that our actions will have an impact as they are washed out over time. We discount for the fact that the world may end and perhaps there will be no future. We discount for ourselves if we know that we want things sooner rather than later.
But never should we discount the moral worth of future beings simply because they are in the future. There is just no case for it. Like none. I cannot think of one, philosophers around the world cannot think of one (the closest I have heard of is a rare, mostly dismissed view that all beings who do not yet exist have zero moral worth), and I assume you cannot think of one either. People distanced in time are like us, individuals. Their tears, their helplessness, their pain and sorrow, their joy and laughter all matter. They matter.
Thanks for your submission!
It can seem strange to focus on the wellbeing of future people who don’t even exist yet, when there is plenty of suffering that could be alleviated today. Shouldn’t we aid the people who need help now and let future generations worry about themselves?
We can see the problems with near-sighted moral concern if we imagine that past generations had felt similarly. If prior generations hadn’t cared for the future of their world, we might today find ourselves without many of the innovations we take for granted, suffering from far worse degradation of the environment, or even devastated by nuclear war. If we always prioritize the present, we risk falling into a trap of recurring moral procrastination, where each successive generation struggles against problems that could have been addressed much more effectively by the generations before.
This is not to say there are no practical reasons why it might be better to help people today. We know much more about what today’s problems are, and the future may have much better technology that makes fixing its own problems much easier. But acknowledging these practical considerations needn’t lead us to believe that helping future people is inherently less worthwhile than helping the people of the present. Just as impartial moral concern leads us to weigh the lives of individuals equally regardless of race or nationality, so too should we place everyone on equal footing regardless of when they exist in time.
Thanks for your submission!
Note: I wrote a post recently that tries, in part, to answer this question. The post isn’t a 2 minute answer, more like a 15 minute answer, so I’ve adapted some of it below to try and offer a more targeted answer to this question.
Let’s agree that the 8 billion people alive right now have moral worth—their lives mean something, and their suffering is bad. They constitute, for the time being, our moral circle. Now, fast forward thirty years. Billions of new people have been born. They didn’t exist before, but now they do.
Should we include them in our moral imagination now, before they are even born? There are good reasons to believe we should. Thirty years ago, many who are alive today (including me!) weren’t born. But we exist now, and we matter. We have moral worth. And choices that people and societies made thirty years ago affect people who were not yet born but who have moral worth now. Our lives are made better or worse by the causal chain that links the past to the present.
Aristotle teaches us that time, by itself, is not efficacious. He’s wrong about that in some respects: in modern economies, time alone is enough for inflation to erode the value of currency, and for new policies or technologies to come into existence and scale up, etc., which might lead us to believe that we should discount the future accordingly. But he’s right when it comes to the moral worth of humans. The moral worth of humans existing now isn’t any less than the moral worth of humans a generation ago; for the same reason, the moral worth of humans a generation from now is just as important as humans’ moral worth right now.
Our choices now have the power to influence the future, the billions of lives that will come to exist in the next thirty years. Our choices now affect the conditions under which choices will be made tomorrow, which affect the conditions under which choices will be made next year, etc. And future people, who will have moral worth, whose lives will matter, will be affected by those choices. If we take seriously the notion that what happens to people matters, we have to make choices that respect the moral worth of people who don’t even exist yet.
Now expand your moral circle once more. Imagine the next thirty generations of people. So far, there have been roughly 7,500 generations of humans, starting with the evolution of Homo sapiens roughly 150,000 years ago. One estimate puts us at a total of just over 100 billion human beings who have ever lived. The next thirty generations of humans will bring into existence at least that many humans again. Each of these humans will have the same moral worth as you or I. Why should we discount their moral worth, simply because they are in a different spot on the timeline than we are?
If possible, we should strive to influence the future in a positive direction, because future people have just as much moral worth as we do. Anything less would be a catastrophic failure of moral imagination.
Thanks for your submission!
Looks like I missed this, but I wanted to try it out anyway. Maybe it’ll be useful to someone.
That’s a great question. Prioritizing is important. There are many problems and we want to fix what affects us. But let’s imagine this question differently.
It’s pretty nice having fresh, clean water to drink, right? That’s going to be true for you today, tomorrow, and next week. It’ll be true in 10 days or 10 years. You wouldn’t consider yourself as having less of a right to clean water today than you did 10 years ago, right? And you wouldn’t want people to make decisions today that take away your water tomorrow. Or decisions 10 years ago that ruined your water today.
You’ll probably also want your kids to have clean water—and their kids too. You won’t want anyone to take that away. Because no matter how far you go into the future, they will still need clean water as much as you do today. There’s always going to be someone around who needs clean water just like us. They’ll feel the same pain, joy, and thirst. And they will be just as real and valuable as our own future self. Just as we are as real and valuable as people 10 years ago. Future people have value the same way our future self does. And just as past people ensured we would have clean water, we should ensure that future people can enjoy clean water too.
If something won’t affect anyone for a long time, then we might spend less effort on it and prioritize more immediate problems that are causing suffering. Here it makes sense to discount future people’s needs somewhat—not because they are worth less, but because we have more time. An asteroid 100,000 years away doesn’t mean we should ignore starvation today. Nor does starvation today mean we should forget those who will feel hungry tomorrow.
Thank you to all who made submissions!
Our top bounty winner was Jackson Wagner https://forum.effectivealtruism.org/posts/xcqcF6ksn8rtF5BDp/usd1000-bounty-for-your-best-2-minute-answer-to-an-ea?commentId=YSgCGozR7um8BBiYW#comments
Our 2nd and 3rd prizes went to ludwigbald and Jay Bailey
My submission below is over the word limit by 243 words. I hope it can make up for its lack of brevity with some additional depth.
“Shouldn’t we discount the moral value of people in the future based on how far away they will exist into the future?”
Why do you think so?
“Well, it seems like we care about the future, but there are so many problems here and now. Shouldn’t we work on those first?”
Not if we accept that morality ought to be an impartial affair. If we have agreed that there should be no spatial discount rate, racial discount rate, or species discount rate, then we should be inclined to reject a temporal discount rate.
But perhaps you are partial to partiality, maybe we do have special obligations to certain people. However, we should still be suspicious of discounting moral value based on distance in time. A modest discount rate of 1% per year would imply that the life of Pamba, King of the Hattians, is worth 53,569,849,000,000,000,000 lives today. I do not know what the name of that number is, but I think we can agree that no person is worth that many lives.
“Fine,” you may reply, “but couldn’t there be other reasons we would want to discount the moral value of future people? Can we really say that anything we can do today will affect future people, even in-expectation? And don’t the benefits of interventions now compound into the future? If I build a hospital today, it will serve far more people than it would have if I built it in 200 years!”
We should be wary of this reply. I will only address the first concern directly but think the second is roughly analogous.
Let us say we can choose to bury nuclear waste in one of two places:
(1) near P1 City, which is prone to earthquakes, but which seismologists say will not experience another earthquake for 2,000 years, after which it will likely experience one; or
(2) near P2 City, where earthquakes never occur.
Given the long half-life of nuclear waste, we can be sure that an earthquake 2,000 or so years from now will cause the population of P1 to experience some kind of catastrophe which would have been averted by burying the nuclear waste near P2.
Suppose further that if we choose (1), the people in P2 will be pleased because their property values will not go down, but regardless of whether or not we choose (1), the present people in P1 will be just as well-off. It seems wrong that a small benefit to the people in P2 could justify a catastrophe we can anticipate when choosing (1), just because it would happen much later.
We can still ‘discount’ the effects of improbable events in proportion to their improbability, but we should not expect improbability and distance in time to correlate more than very roughly. Taking probability into account is something we already do in expected value calculations, and it does not amount to discounting the moral value of people based on their distance from us in time.
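To make the difference between those two operations concrete, here is a small sketch with made-up numbers: weighting a harm by its probability is not the same as discounting it by its date.

```python
# Two harms of equal size: one likely and soon, one unlikely and distant.
# All numbers here are invented purely for illustration.
harms = [
    {"name": "soon, likely",      "years_away": 10,   "probability": 0.9},
    {"name": "distant, unlikely", "years_away": 2000, "probability": 0.1},
]
pure_time_rate = 0.01  # a 1% per year pure time discount, for comparison

for h in harms:
    probability_weight = h["probability"]                    # expected-value weighting
    time_weight = (1 + pure_time_rate) ** -h["years_away"]   # pure time discounting
    print(f"{h['name']:>18}: probability weight {probability_weight:.2f}, "
          f"time-discount weight {time_weight:.2e}")
# Probability weighting leaves the distant harm 9x less important than the near
# one; pure time discounting makes it roughly 400 million times less important.
```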
In the same vein, the compounding benefits of some interventions do give us a reason to favor implementing an intervention earlier rather than later, but not all interventions, even among those affecting the near future, have compounding benefits. So, would we really be discounting the moral value of future people by taking this into account, or, as with probability, is there some other heuristic at play?
Thanks for your submission!
That’s definitely a good question to ask. After all, people in the future aren’t here now, and there are a lot of problems we’re facing already. That said, I don’t think we should. I mean—do you or I have any less moral value now than the people who lived a thousand years ago? Regardless of where or when they live, the value of a human life doesn’t change. Basically, I think the default hypothesis should be “a human life is worth the same, no matter what”; we need a compelling reason to think otherwise, and I just don’t see one when it comes to future people.
There are some caveats in the real world, where things are messy. Like, if I said “Why shouldn’t we focus on people in the year 3000”, your first thought probably wouldn’t be “Because they don’t matter as much”. It’d probably be something like “How do we know we can actually do anything that’ll impact people a thousand years from now?” That’s the hard part, but that’s discounting based on chance of success, not morality. We’re not saying helping people in a thousand years is less valuable, just that it’s a lot harder to do. Still, EA definitely has some ideas. Investing money to give later can have really big compounding effects, so that the compounding has a bigger effect than our uncertainty. Imagine you could invest a thousand dollars in something that would definitely work, or ten thousand on something just as effective that was about a 50/50 shot: the riskier option still has five times the expected impact. There’s a whole mode of thought called “patient philanthropy” that deals with this—I could send you a podcast episode if you’d like?
(Followup: Send them to this episode of the 80,000 Hours podcast if interested: https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/)
===
I’ve definitely leaned into the “conversational” aspect of this one—the argument is less rigorous and sophisticated than a lot of others in this post, but I’ve tried to optimise it for something I could understand in real time if someone was speaking to me, and wouldn’t have to read it twice.
Thanks for your submission!
The “globalchallenges.site” link at the start of the post was broken for me. Should it be https://www.globalchallengesproject.org/?
Why would you discount it like that? I mean – it does make sense to discount the value of a change when it happens later in a person’s life, because they do not get to enjoy it for as long. But for future people, it is not that they would enjoy something any less just because they exist in the future, is it? Unless, of course, you are assuming that –
For example, if there was a change that would apply only to a fraction of people just before civilization ends, then it could make sense to assign it a lower moral value, because fewer people would enjoy it.
But in EA it is popular to work on preventing the extinction of humanity, which is why it may not make so much sense when you bring it up. Of course, sometimes it should. For example, if the future is worsening – maybe people are becoming more and more villainous – then they could be discounted more. Or, if they become less conscious – I think Jason Schukraft wrote about this; if you are interested, see the Intensity of Valenced Experience across Species post on the EA Forum.
But anyway, why are you even thinking about this? Are you, like, interested in moral value or the future? Or just, like, maybe population ethics?
I mean, so cool, but there is a lot of material that probably no one has actually reviewed in its entirety, so I am not sure who to even refer you to for this.
Anyway, yeah a great question though!!
Thanks for your submission!
It can be hard to care about the wellbeing of people who might be living thousands or millions of years in the future, when they seem so abstract and remote. But imagine if you were somehow living one million years in the past, a time when Homo erectus still roamed the earth. Would you say that people living in the present have less moral value than you, just because they live one million years in the future compared to you? Would you say that their joys and pains are any less worthy of moral consideration and sympathy? I remember the joys of going to my cousin’s wedding, and conversely the atmosphere of grief going to a friend’s funeral.* I don’t think what happens in the present should matter any less from the perspective of someone living a million years in the past, and likewise, the experiences that people in the distant future have matter just as much as what happens now.
Now, there are some instrumental reasons to discount the moral value of people in the future which I think are quite legitimate. For example, if you think that there’s only a 70% chance that there will be any people a thousand years from now, you should apply a discount factor for that. You might also think that helping the present has more positive ripple effects to help people in the future, so you could focus on supporting people living in the present in order to help safeguard future generations. Still, we should think of the intrinsic value of people as the same regardless of when they are living.
*This sentence can be removed or left out entirely.
Thanks for your submission!
When is the deadline to submit?
I think it passed already because the “Bounty (closed)” tag was added a few hours ago.
Good question! I think I understand where you are coming from, but I don’t think we should do that. We are used to not really caring or thinking about future people, but I think there are other reasons behind that.
A very important one is that we don’t see future people or their problems, so we don’t sympathize with them the way we do with everyone else. We have to make an effort to picture them and their troubles. As if we didn’t have enough in the here and now!
Another one is the odds of those futures. Any prediction is less likely to come true the further it reaches into the future.
And lastly, we have to take into consideration how the influence of any action diminishes over time.
So it’s not the value of a person that changes if they haven’t been born yet, but the chances of helping them. And when we decide how to use our resources we should keep both things in mind, so we can calculate the “expected value” of every possible action and choose the one with the highest.
So why do many effective altruists want to focus on those causes if there is a discount caused by lower probabilities? Because they believe that there could be many, many more people in the future than have ever existed, so the value of helping them and saving them from existential risks is higher, to the point of turning the results around.
Of course, it is really difficult to make good predictions, and there is no consensus on how important longtermism is, but I think we should always take into account that most of the time our emotions and desires will favor short-term things and won’t care about the issues they don’t see.
Thanks for your submission!
Good question, I’ll try to answer with a little analogy :)
We shouldn’t discount the moral value of people in the future based on how far away in time they are, because if two experiences are identical, it shouldn’t matter morally when they happen. Discounting their moral value would mean that a person 1,000 years from now who experiences pain will have their suffering treated as less significant now, just because it happens in the future.
Think about someone 1,000 years ago who stubs their toe. If they applied a discount to suffering in the far future, then someone’s much worse experience today—like breaking a bone—might be considered less bad than just stubbing a toe many years ago. That’s a bit absurd, so we shouldn’t treat future people using the same reasoning that would lead past people to care less about suffering that happens today.
Some people intuitively think that when we compare moral value across time there should be a discount rate like there is with money. But the reason there’s discounting with money is that you can earn interest on a dollar you save today! So money now is literally more valuable than money later. This doesn’t apply at all in the context of joy or pain.
You might also think that because we can’t know what will happen in the future we should discount due to uncertainty—but our current question is about how we should compare two events that we know will happen, so that’s not an argument for a moral discount rate (though uncertainty is a separate and important consideration!).
That’s why a lot of EAs think we shouldn’t treat joy as less good or pain as less bad just because it’s not happening right at this moment. Does that make sense?
Thanks for your submission!
Question: Why Shouldn’t We Discount the Lives of Future People?
Answer: Discounting the value of the future is a natural human thing to do, and there are contexts where it makes sense. If offered twenty dollars today or in ten years it makes sense to take it now. In ten years the money won’t buy as much. But it doesn’t make sense to value human lives this way. Here’s why.
Let’s say we weigh a future person’s life at one percent less for each year in the future they exist. That doesn’t seem so unreasonable, but the discounting compounds rapidly. If Caesar had done this, he would have valued the life of one of his own as much as roughly a billion people alive today. Most people don’t think that would be fair. We shouldn’t discount the lives of future people for the same reason we would not want past people to have discounted us. We can’t get around this by tweaking that number either: for any discount rate we imagine, we can go out far enough that we are committed to making an absurd tradeoff.
We also have no reason to treat a human life like a depreciating asset. The contexts where discounting is valid are cases where the value of something falls over time or uncertainty makes the benefit less likely. This is why we can rationally discount future monetary rewards. Money loses value over time, but a person’s life does not. There is no reason to think that your life would be worth less had you been born ten years later, and the same goes for future people. Discounting future lives commits us to treating a human life as if it were something that loses value over time. Since it is not, it doesn’t make sense to discount them.
Thanks for your submission!
This is a nice project, but as many people point out, this seems a bit fuzzy for a “FAQ” question. If it’s an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There’s probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and academic mainstreams, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable rate of discount) to other existential risks.
That’s reasonable—thanks for sharing! We might try and shake it up if we do a future round; will need to think about it.
It’s a good point; there are often cases for discounting in a lot of decisions where we’re weighing up value. It’s usually done for two reasons. One is uncertainty: we’re less certain of stuff in the future, so our actions might not do what we expect, or the reward we’re hoping for might not actually happen. The second is only relevant to financial stuff: given inflation – and that you’re likely to have more income the older you are – money’s real value is greater now than later.
The second reason doesn’t really apply here, because happiness doesn’t decrease in value as you go through generations: your happiness doesn’t matter less than your parents’ or grandparents’ did, even though $5 now means less than $5 then. The first reason is interesting, because there is a lot of uncertainty in the future. And for some of our actions this means we should discount their expected effects, since they might not do what we expect, but that doesn’t mean the people themselves are of less value – just that we’re not as sure how to help them. I think the actions we can be most sure will help them are things that reduce risks in the short-term future, because if everything goes to crap or we all die, that’s pretty sure to be negative for them. But uncertainty about the people themselves would look like: “I know how to help these guys, but I’m not sure I want to, like I’m not sure they’ll be people worth helping.” Personally I think I might care about them more, given every generation so far has had advances in the way they treat others. I like you already, but I reckon I might like us even better if we’d grown up 5,000 years from now!
Thanks for your submission!