Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
There are two main reasons for this:
Unpredictability of Future Effects
It’s nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet estimating the long-term effects of medical research over 10,000 years, or even millions of years, is beyond our capacity.
Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I’m interested to hear your opinions on these challenges and how they relate to understanding longtermism.
I don’t have time to look into this in full depth, but it looks like a good paper, making useful good-faith critiques, which I very much appreciate. Note that the paper is principally arguing against ‘strong longtermism’ and doesn’t necessarily disagree with longtermism. For the record, I don’t endorse strong longtermism either, and I think that the paper delineating it, which came out before any defenses of (non-strong) longtermism, has been bad for the ability to have conversations about the form of the view that is much more widely endorsed by ‘longtermists’.
My main response to the points in the paper would be by analogy to cosmopolitanism (or to environmentalism or animal welfare). We are saying that something (the lives of people in future generations) matters a great deal more than most people think (at least judging by their actions). In all cases, this does mean that adding a new priority will mean a reduction in resources going to existing priorities. But that doesn’t mean these expansions of the moral circle are in error. I worry that the lines of argument in this paper apply just as well to denying previous steps like cosmopolitanism (caring deeply about people’s lives across national borders). e.g. here is the final set of bullets you listed with minor revisions:
Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about ~~the far future~~ distant countries.
Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about ~~future generations~~ people in distant countries in principle, our resources are constrained.
Focusing on ~~the far future~~ distant countries comes at a cost to addressing ~~present-day~~ local needs and crises, such as health issues and poverty.
Implementing ~~longtermism~~ cosmopolitanism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
What I’m trying to show here is that these arguments apply just as well against previous moral circle expansions which most moral philosophers would think were major points of progress in moral thinking. So I think they are suspect, and that the argument would instead need to address things that are distinctive about longtermism, such as arguing positively that future people’s lives don’t matter morally as much as present people’s.
The “distant country” objection does not defend against the argument that “We Are Not in a Position to Predict the Best Actions for the Far Future”.
We can go to a distant country and observe what is going on there, and make reasonably informed decisions about how to help them. A more accurate analogy would be if we were trying to help a distant country that we hadn’t seen, couldn’t communicate with and knew next to nothing about.
It also doesn’t work as a counterargument for “The Far Future Must Conflict with the Near Future to be Morally Relevant”. The authors are claiming that anything that helps the far future can also be accomplished by helping people in the present. The analogous argument that anything that helps distant countries can also be accomplished by helping people in this country is just wrong.
We can make meaningful decisions about how to help people in the distant future. For example, to allow them to exist at all, to allow them to exist with a complex civilisation that hasn’t collapsed, to give them more prosperity that they can use as they choose, to avoid destroying their environment, to avoid collapsing their options by other irreversible choices, etc. Basically, to aim at giving them things near the base of Maslow’s Hierarchy of Needs or to give them universal goods — resources or options that can be traded for whatever it is they know they need at the time. And the same is often true for international aid.
In both cases, it isn’t always easy to know that our actions will actually secure these basic needs, rather than making things worse in some way. But it is possible. One way to do it for the distant future is to avoid catastrophes that have predictable longterm effects, which is a major reason I focus on that and suggest others do too.
I don’t see it as an objection to Longtermism if it recommends the same things as traditional morality — that is just as much a problem for traditional theories, by symmetry. It is especially not a problem when traditional theories might (if their adherents were careful) recommend much more focus on existential risks but in fact almost always neglect the issue substantially. If they admit that Longtermists are right that these are the biggest issues of our time and that the world should massively scale up focus and resources on them, and that they weren’t saying this before we came along, then that is a big win for Longtermism. If they don’t think it is all that important actually, then we disagree and the theory is quite distinctive in practice. Either way the distinctiveness objection also fails.
This is in tension with “We Are Not in a Position to Predict the Best Actions for the Far Future”, isn’t it?
It is rather that longtermists have not provided any examples of moral decisions that would be different if we were to consider the far future versus the near future. All current focus areas, the authors argue, can be justified by appealing to the near future.
Yeah, perhaps I am subtly misrepresenting the argument. Trying again, I interpret it as saying:
People have justified longtermism by pointing to actions that seem sensible, such as the claim that it made sense in the past to end slavery, and it makes sense currently to prevent existential risk. But both of these examples can be justified with a lot more certainty by appealing to the short term future. So in order to justify longtermism in particular, you have to point out proposed policies that are a lot less sensible seeming, and rely on a lot less certainty.
It might help to clarify that in the article they are defining “long term future” as a scale of millions of years.
If you’re referring to the first point, I would reword this to:
What is the more widely endorsed view of longtermists?
I largely agree with your “distant countries” objection. Just because something is practically implausible does not make it morally wrong, or not worthy of attention. I also think it’s not necessarily true that implementing longtermism requires radical changes to human psychology or social institutions. We need not necessarily convince every human on the planet to care about the lives of future generations, only those who might have a meaningful impact (which could be a small number).
Nevertheless, I think the other three objections that you don’t mention provide some interesting and potentially serious challenges for longtermism, perhaps for weaker forms as well.
It’s slightly odd this paper argues that:
But then also says:
I’m left uncertain if the authors are in favor of spending to address existential risk, which would of course lead to less money to address present-day suffering due to health issues and poverty.
Hi all,
Thanks to all for taking the time to discuss our paper. I don’t have time to read and comment on everything I’ve seen discussed in the forum, but I thought it would be worthwhile to comment on a few misunderstandings (some of which others have already pointed to):
Although our main focus is on strong longtermism, many things we say are relevant for weaker views as well.
Many of our arguments are related in the sense that if you bite the bullet on one, you run into another problem.
I think there are good reasons to think that the psychological constraints for distance in space and distance in time are different, because distance in space can be overcome, and arguably the distance of “caring” is less today than it was in the past because distance can be reduced in so many ways, but the distance to far future individuals is for obvious reasons very different.
The difference-maker argument is not saying that we cannot do things that are good for the future, but that we don’t need to think about the far future to know that these things are good. I think this is important to keep in mind when one addresses the problem of predicting far future consequences and evaluating value weights for the far future.
Speaking for both me and Karolina, we’d be super happy if a longtermist would take the time to respond to our paper.
For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
This seems a very strange view. If we knew the future would not last long—perhaps a black hole would swallow up humanity in 200 years—then the future would not be very vast, it would have less moral weight, and aiding it would be less demanding. Would this really make longtermism more palatable to its critics?
In the article the authors are somewhat ambiguous about the meaning of ‘near future’. They do at one point refer to the present and the next few generations as a potential time frame. But your point raises an interesting question for the longtermists: How long does the future need to be in order for future people to have moral weight?
Although we might want to qualify it slightly in that the element of interest is not necessarily the number of years into the future but rather how many people (or beings) will be in the future. The question then becomes: How many people need to be alive in the future in order for their lives to have moral weight?
If we knew a black hole would swallow humanity in 200 years, on some estimates, there could still be ~15 billion human lives to come. If we knew that the future held only 15 more billion lives, would that justify not focusing on existential risks?
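As a rough sketch of the arithmetic behind that figure (the average birth rate used here is an illustrative assumption of mine, not a number from the paper):

$$\underbrace{7.5 \times 10^{7}}_{\text{assumed average births per year}} \times \underbrace{200}_{\text{years}} \approx 1.5 \times 10^{10} \text{ lives}$$

Current global births are on the order of $1.3 \times 10^{8}$ per year, so ~15 billion implies a substantially declining birth rate, but the figure stays in the tens of billions under a wide range of assumptions.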
I’m not sure I buy the “We are not in a position to predict the best actions for the far future” argument.
I would say the following would, in expectation, boost medical research in millions of years:
Not going extinct or becoming disempowered: if you’re extinct or completely disempowered you can’t do medical research (and of course wellbeing would be zero or low!).
Investing in medical research now: if we invest in such research now we bring forward progress. So, in theory, in millions of years we would be ahead of where we would have been if we had not invested now. If there’s a point at which we plateau with medical research then we would just reach that plateau earlier and have more time with the highest possible level of medical research.
They will probably value:
Being alive: another argument for not going extinct.
Having the ability to do what they want: another argument for not becoming permanently disempowered. Or not to have a totalitarian regime control the world (e.g. through superintelligent AI).
Minimizing suffering: OK, maybe they will like suffering, who knows, but in my mind that would mean things have gone very wrong. Assuming they want to minimize suffering we should try to, for example, ensure factory farming does not spread out to other planets and therefore persist for millennia. Or advocate for the moral status of digital minds.
Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future. Rather it is that we cannot predict the effect that medical research will have on the far future. For example, the magnitude of the effect may be so incredibly large that it might take priority over traditional existential risks, either because it leads to a good future, or perhaps to a bad future. Or perhaps further investments in medical research will not lead to any significant gains in the things we care about. Either way we don’t have a means of predicting how our current actions will influence the far future.
With regards to value: being alive, having the ability to do what we want, and minimizing suffering might very well be things that people in the far future value, but they are also things that we value now. On the authors’ account, therefore, these values can guide our moral decision-making by virtue of being things we value now and into the near future, and referencing that they will also be valued in the far future is an irrelevant extra piece of information, i.e. it does no additional work in guiding our moral decision-making.
FWIW I think it’s pretty unclear that something like reducing existential risk should be prioritised just based on near-term effects (e.g. see here). So I think factoring in that future people may value being alive and that they won’t want to be disempowered can shift the balance to reducing existential risk.
If future people don’t want to be alive they can in theory go extinct (this is the option value argument for reducing existential risk). The idea that future generations will want to be disempowered is pretty barmy, but again they can disempower themselves if they want to so it seems good to at least give them the option.
Thanks for linking to that research by Laura Duffy, that’s really interesting. It would have been relevant for the authors of the current article as well.
According to their analysis, spending on conservative existential risk interventions is cost competitive (within an order of magnitude) with spending on AMF. Further, compared to plausible less conservative existential risk interventions, AMF is “probably” an order of magnitude less cost-effective. Under Rethink Priorities’ estimates for welfare ranges, compared with cage-free campaigns and the hypothetical shrimp welfare intervention, existential risk interventions are either cost competitive or an order of magnitude less cost-effective.
I think that actually gives some reasonable weight to the idea that focusing on existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years and even then a case can be made that interventions focussing on existential risk should be prioritised. At the very least it adds a level of uncertainty about the relevance of the far future in moral decision-making.
This is pretty vague. If existential risk is roughly on par with other cause areas then we would be justified in giving any amount of resources to it. If existential risk is orders of magnitude more important then we should greatly prioritize it over other areas (at least on the current margin). So factoring in the far future does seem to be very consequential here.
According to the authors of the linked article, longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. Their claim is that the burden of proof here lies with the longtermist. If the far future is important for moral decision-making then this claim needs to be justified. A surface-level justification that people in the far future would want to be alive is equally justified by reference to the near future.
You linked a quantitative attempt at answering the question of whether focus on existential risk requires priority if we consider <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?
In order to provide a justification for preventative x-risk policies with reference to their impact on the far future, we would need to compare it with the impact of other focus areas and how they would influence the far future. That is in part where the ‘We Are Not in a Position to Predict the Best Actions for the Far Future’ claim fits in: how are we supposed to do an analysis of the influence of any intervention (such as medical research, but also including x-risk interventions) on people living millions of years into the future? It’s possible that if we did have that kind of predictive power, many other focus areas may turn out to be orders of magnitude more important than focus on existential risks.
The analysis I linked to isn’t conclusive on longtermism being the clear winner if only considering the short-term. Under certain assumptions it won’t be the best. Therefore if only considering the short-term, many may choose not to give to longtermist interventions. Indeed this is what we see in the EA movement where global health still reigns supreme as the highest priority cause area.
What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here). In short, significantly more value is at stake with reducing existential risk because now you care about enabling far future beings to live and thrive. If longtermism is the clear winner then we shouldn’t see a movement that clearly prioritises global health, we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.
As for your final point, I think I understand what you / the authors were saying now. I don’t think we have no idea what the far future effects of interventions like medical research are. We can make a general argument it will be good in expectation because it will help us deal with future disease which will help us reduce future suffering. Could that be wrong—sure—but we’re just talking about expected value. With longtermist interventions, the argument is the far future effects are significantly positive and large in expectation. The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
It isn’t a clear winner but neither were any of the other options and it was cost competitive.
In this thread Toby Ord has said that he and most longtermists don’t support ‘strong longtermism’. Although he hasn’t elucidated what the mainstream view of longtermism is.
If all the argument amounts to is that it will be good in expectation, well we can say that about a lot of cause areas. What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Future wellbeing does matter but focusing on existential risk doesn’t necessarily lead to greater future wellbeing. It leads to humans being alive. If the future is filled with human suffering, then focus on existential risk could be one of the worst focus areas.
Yeah the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future that allows longtermism to beat other areas. The argument for “normal” longtermism i.e. not “strong” is pretty much the same structure.
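As a very rough sketch of that structure (the symbols and numbers below are illustrative assumptions of mine, not figures from either paper): the expected value of reducing extinction risk scales with the expected number of future lives,

$$\mathbb{E}[\Delta V] \approx \underbrace{\Delta p}_{\text{reduction in extinction probability}} \times \underbrace{\mathbb{E}[N]}_{\text{expected future lives}} \times \underbrace{\bar{v}}_{\text{average value per life}},$$

so with, say, $\Delta p = 10^{-6}$ and $\mathbb{E}[N] = 10^{14}$, the expected gain is on the order of $10^{8}$ life-equivalents, far more than the same resources could plausibly achieve in the near term. The dispute is then over whether $\Delta p$ and $\mathbb{E}[N]$ can be estimated meaningfully at all.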
Yes that’s true. Again we’re dealing with expectations and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of reducing existential risk. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.
Where is this comparison? I feel like I’m repeating myself here but in order to argue that focus on existential risk is one of the best things we can do for the far future, it needs to be compared with the effect of other focus areas on the far future. But if we are not in a position to predict how cause areas will affect the far future (including focus on x-risk), then how can we make the comparison to say that focussing on existential risk is better than any of the other causes?
Put another way, if focus on existential risk is better for the far future than medical research, we need to show that focus on medical research is worse for the far future than focus on existential risk. Since we aren’t in a position to predict the impact medical research will have on the far future, we aren’t in a position to make such a comparison. Otherwise the argument just collapses to, existential risk is probably good for the far future, so let’s focus on it.
To be clear, if every focus area received equal priority including existential risk, I don’t see how we can justify a greater investment into existential risk by arguing that it will have the highest expected value for the far future.
However, not all focus areas receive equal priority. The analysis you linked by Duffy shows that focusing on existential risk, at least in the short term, is cost-competitive with some of the more effective focus areas like AMF, and perhaps in virtue of this it is more neglected than other areas. Therefore, it appears to be a worthy cause area. My concern is with how reference to the far future is being used as a justification for this cause.
It is commonly assumed a lot of interventions will likely fall prey to the “washing-out hypothesis” where the impact of the intervention becomes less significant as time goes on, meaning that the effects of actions in the near future matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade or “wash out.” So in practice most people would assume the long-term impact of something like medical research is, in expectation, zero.
Longtermists aim to avoid “washing out”. One way is to find interventions that steer between “attractor states”. For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time.
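A minimal way to see why persistence matters (the per-year benefit $b$, decay rate $r$, per-year value $w$ and duration $T$ below are illustrative symbols, not from the paper): an intervention whose benefit fades at rate $r$ per year contributes a bounded total no matter how long the future lasts, whereas shifting by $\Delta p$ the probability of reaching a persistent attractor state scales with how long that state endures,

$$\underbrace{\sum_{t=0}^{\infty} b\,(1-r)^{t} = \frac{b}{r}}_{\text{washes out: bounded}} \qquad \text{vs.} \qquad \underbrace{\Delta p \times w \times T}_{\text{attractor state: grows with } T}.$$

The first total is capped however vast the future is; the second is not, which is why longtermists look for interventions of the second kind.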
This is all explained better in the paper The Case for Strong Longtermism, which I would recommend you read.
The washing out hypothesis is a different concern to what we are talking about here. The idea I have been discussing here is not that an intervention might become less significant as time goes on. An intervention could be extremely significant for the far future, or not significant at all. However, our ability to predict the impact of that intervention on the far future is beyond our capacity.
From the article:
Or perhaps the difficulty lies in the high number of causal possibilities the further we reach into the future.
In the article they compare the impact of an intervention (malaria bed nets) on the near future with the impact of an intervention (reducing x-risk from asteroids, global pandemics, AI risk) on the far future. As I said earlier, not an adequate comparison.
If we compare the positive impact of an intervention on quadrillions of people to a positive impact of an intervention on only billions of people, should we be surprised that the intervention that considers the impact on more people has a greater effect? Put another way, should we be surprised the bed net intervention has a smaller impact when we reduce the time horizon of its impact to the near future?
To this you might say, well interventions focused on malaria might have this ‘washing out’ effect. But so might interventions for reducing existential risk. For example, the intervention discussed in the paper to reduce extinction-level pandemics is to spend money on strengthening the healthcare system. Something that could easily be subject to the ‘washing out’ effect.
Nevertheless, the bed net intervention is only one intervention, and there are other interventions that could have more plausible effects on the far future which would be more adequate comparisons (if such comparisons were feasible in the first place), for example, medical research.
If extinction and non-extinction are “attractor states” (from what I gather, states that are expected to last an extremely long time), what exactly isn’t an attractor state?
Let me translate that sentence: Focusing on existential risk is more beneficial for the far future than other cause areas because it increases the probability of humans being alive for an extremely long time. If it’s more beneficial, we need the relevant comparison, as per above, the relevant comparison is lacking.
Any state that isn’t very persistent. For example, an Israel-Gaza ceasefire. We could achieve it, but from history we know it’s unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.
The extinction vs non-extinction example is the classic attractor state example, but not the only one. Another one people talk about is stable totalitarianism. Imagine either China or the US could win the race to superintelligence. Whichever country wins the race essentially controls the world for a very long time, given how powerful superintelligence would be. So we have two different attractor states—one where China wins and has long-term control and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one—the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins we would experience the better state for a very long time, which seems very high value.
There are ways to counter this. You can argue the states aren’t actually that persistent e.g. you don’t think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn’t clearly better than the other. Or you can argue that there’s not much we can do to achieve one state over the other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it’s important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long-term, as opposed to long-term outcomes (I explain this better here).
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state which makes it vulnerable to washing out. Medical research doesn’t either.