Against longtermism
Hello, it’s me again! I’ve been happily reading through the EA introductory concepts, and I would like to have my say on “Longtermism”, which I read about at 80,000 Hours. I have also read Against the Social Discount Rate, which argues that the discount rate for future people should be zero.
Epistemic status: I read some articles and papers on the 80,000 Hours and effective altruism websites, and thought about whether they made sense. ~12 hours
Summary
Longtermism assumes that thoughtful actions now can have a big positive impact in the future. At worst, they could fail and have no effect.
But this is probably false.
In the past, all events with big positive impacts on the future occurred because people wanted to solve a problem or improve their circumstances, not because of longtermism.
All the causes longtermism points to can be tackled without it.
Longtermism is costly and has doubtful benefit. Therefore, it should be deprioritized.
Can we be effective?
Derek Parfit, who co-wrote Against the Social Discount Rate, has said the following:
“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”
This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer? I can think of two possibilities.
Cleopatra spends the extra time saved by skipping dessert and invents biology, chemistry, biochemistry, oncology, and an efficacious cancer treatment. I assign this close to zero probability.
The resources saved by skipping dessert set off a chain reaction that makes Egypt, at the time a client state of the Roman Republic, significantly stronger. I think this is quite unlikely.
Did you see the rhetorical sleight of hand? Parfit claimed that skipping dessert leads to a cure for cancer. We are supposed to take as axiomatic that a small sacrifice now will have benefits in the future. But in fact, we cannot assume this.
Edit: I learned in the comments that I misunderstood this example—it was a hypothetical to show that “time discount rate” is invalid. I agree time discount rate is invalid, so I don’t have anything against this example in its context. Sorry about my misunderstanding!
***
Most of the 80,000 Hours article attempts to persuade the reader that longtermism is morally good by explaining the reasons we should consider future people. But the part about how we are able to benefit future people is very short. Here is the entire segment, excerpted:
We can “impact” the future. The implicit assumption – so obvious that it’s not even stated – is that, sure, maybe we don’t know exactly how to be most effective, but if we put our minds to it, surely we can come up with interventions whose results range from zero (in the worst case) to positive.
The road to hell is paved with good intentions
Would this have been true in the past? I imagined what a high-conviction longtermist would do at various points in history. Our longtermist would be an elite in the society of the time, someone with the ability to affect things. Let’s call him “Steve”. Steve adopts the values of the time he travels to, just as a longtermist in 2022 adopts the values of 2022 when deciding what would benefit future people.
1960 AD
The Cold War has started, and the specter of nuclear annihilation is terrifying. Steve is worried about nuclear existential risk, but realizes that he has no hope of getting the United States and the Soviet Union to disarm. Instead, he focuses on what could impact people in the far future. The answer is immediately obvious: nuclear meltdowns and radioactive nuclear waste. Meltdowns can contaminate land for tens of thousands of years, and radioactive waste can similarly be dangerous for thousands of years. Therefore, Steve uses his influence to obstruct and delay the construction of nuclear power plants. Future generations will be spared the blight of thousands of nuclear power plants everywhere.
1100 AD
Steve is a longtermist in Europe. Thinking for the benefit of future humans, he realizes that he must save as many future souls as possible, so that they are able to enter heaven and enjoy an abundance of utils. What better way to do this than to reclaim the Holy Land from Islamic rule? Some blood may be shed and lives may be lost, but the expected value is strongly positive. Therefore, Steve uses his power to start the Crusades, saving many souls over the next 200 years.
50 BC
Steve is now living in Egypt. Thinking of saving future people from cancer, he convinces Cleopatra to skip dessert. Somehow, this causes Egypt to enter a golden age, and Roman rule over Europe lasts a century longer than it would have.
Unfortunately, this time Steve messed up. He forgot that the Industrial Revolution, the starting point for a massive upgrade of humanity’s living standards and a prerequisite for cancer cures, happened due to a confluence of factors in the United Kingdom (relatively high bargaining power of labor, the Magna Carta, a balance of power between nobles and the Crown, the strength of the Catholic Church). Roman domination was incompatible with all of those things, and its increased longevity actually delayed the cure for cancer by a century!
Edit: the source of the example is invalid, but I think the theme of “hard to know future outcomes” in the second paragraph is still plausible if we accept the first paragraph’s hypothesis.
***
I believe these examples show that it’s really unlikely that a longtermist at any point in history would have had a good sense of how to benefit future people. Well-intentioned interventions could just as easily have turned out to be harmful. I don’t see any reason why the current moment in time would be different. The world is a complex system, and trying to affect the far-future state of a complex system is a fool’s errand.
In the past, what sorts of events have benefitted future humans?
Great question. The articles I read generally point to economic growth as the main cause of prosperity. Economic growth is said to come from technological innovation (discoveries), social innovation (more effective forms of government), and larger populations, which have a multiplier effect.
Let’s look at a few major turning points and see whether longtermism was a significant factor: the discovery of fire, sedentary agriculture, the invention of the wheel, the invention of writing, the invention of the printing press, and the Industrial Revolution.
Fire was discovered and controlled around a million years ago. While we have no records of the discoverers’ motivations, it’s likely that they were not thinking of the far future: most likely they wanted fire because it provided warmth and could be used to cook food, both of which were a big help for survival.
Sedentary agriculture developed around 10,000 BC because it produced much more food. Its purpose was to gain an edge against competing tribes: the denser populations that agriculture supported could defeat hunter-gatherers. The benefits to future humans were not a consideration.
The wheel was invented around 4000 BC because it let people transport goods with less effort. It is unlikely that the inventors were motivated by the utils of far-future humans, yet the wheel undoubtedly contributed to future prosperity.
Writing was first invented around 3400 BC in Mesopotamia. Historians believe that people invented writing to keep counts of assets, to record information, and to communicate with people at greater distances. Here, too, it is unlikely that they thought of future humans; they liked writing for its immediate benefits.
The printing press was invented around 1440 AD, and Wikipedia says that “The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating.” The resulting Printing Revolution, which fed the literacy and education flywheels, emerged as “the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work processes.” It was not because of longtermism.
The Industrial Revolution, from roughly 1760 to 1840, was really good for humanity overall. It came about because people wanted to become richer and more powerful. Longtermism is almost never mentioned as a reason for it.
In summary, it looks as though most advances that have benefitted the future came about because people had a problem they wanted to solve, or wanted to increase the immediate benefits to themselves.
We can achieve longtermism without longtermism
There are examples of people taking actions that look like they require a longtermism mindset to make sense. For example:
An Indian man planted 1,360 acres of trees on barren land, turning it into a forest
A church in Spain, the Sagrada Família, has been under construction for 140 years (intentionally – not due to red tape) and is expected to need at least 10 more years to finish
The United States and the Soviet Union built spaceships to explore space, with no reward except for the dream of humanity heading to the stars
Energy too cheap to meter through thousands of nuclear power plants (banned by Steve)
But note that an explanation that does not require longtermism is available for each of these cases:
Trees only take several years to grow, and so the man could enjoy the fruits of his labor within his lifetime
The act of building the church itself became a focal point and a tourist attraction
The space race happened because of geopolitical competition, not longtermism
Longtermism is also not required for many popular causes commonly associated with it. Taking existential risks as an example:
Pandemic risk prevention can be justified on economic grounds or humanitarian grounds, as it pretty obviously affects current humans; we don’t need longtermism to justify working on this
AI risk, within the timelines proposed by knowledgeable researchers, will impact most people alive today within their lifetimes, or their children’s
Preventing nuclear war can similarly be justified without longtermism, which we know because it has been for many decades already
Conclusion
The main point is that intervening for long-term reasons is not productive, because we cannot assume that interventions are positive. Historically, interventions based on “let’s think long term”, rather than on solving an immediate problem, have tended to be negative or negligible in effect.
Additionally, longtermism was not a motivating factor behind previous increases in prosperity. Nor is it necessary for tackling most current cause areas, such as existential risk. Longtermism is costly, because it reduces popular support for effective altruism through “crowding out” and “weirdness” effects.
Why do we think that longtermism, now, will have a positive effect and will be a motivating factor?
If it does not serve any useful purpose, then why focus on longtermism?
I’m quite happy that you are thinking critically about what you are reading! I don’t think you wrote a perfect criticism (see below), but the act of taking the time to write a criticism and posting it to a public venue is not an easy step. EA always needs people who are willing and eager to probe its ethical foundations. Below I’m going to address some of your specific points, mostly in a critical way. I do this not because I think your criticism is bad (though I do disagree with a lot of it), but because I think it can be quite useful to engage with newer people who take the time to write reasonably good reactions to something they’ve read. Hopefully, what I say below is somewhat useful for understanding the reasons for longtermism and what I see as some flaws in your argument. I would love for you to reply with any critiques of my response.
It doesn’t, and that’s not Parfit’s point. Parfit’s point is that if one were to employ a discount rate, Cleopatra’s dessert would matter more than nearly anything today. Since (he claims) this is clearly wrong, there is something clearly wrong with a discount rate.
Well yes, but that’s because it’s covered in the other pages linked there. Mostly, this has to do with thinking about whether existential risks are near, and whether there is anything we can do about them. That isn’t really in the scope of that article, but I agree the article doesn’t show it.
That isn’t entirely true. There are some things that routinely affect the far future of complex systems. For instance, complex systems can collapse, and if you can get one to collapse, you can pretty easily affect its far future. Conversely, if a system is about to collapse due to an extremely rare event, then preventing that collapse can affect its far-future state.
Obviously, it wasn’t. But of course it wasn’t! Longtermism didn’t exist yet, so it couldn’t have been a significant factor in anyone’s decisions. Maybe you are trying to say “people can make long-term changes without being motivated by longtermism.” But that doesn’t say anything about whether longtermism might make them better at creating long-term changes than they otherwise would be.
I generally agree with this, and so do many others; for instance, see here and here. However, I think it’s possible that this may not be true at some time in the future. I personally would like to have longtermism around in case there is really something where it matters, mostly because I think it is roughly correct as a theory of value. Some people may even think this is already the case. I don’t want to speak for anyone, but my sense is that people who work on suffering risk generally take longtermism into consideration but don’t care as much about existential risk.
First, I agree that interventions may be negative, and I think most longtermists would also strongly agree. As for whether historical “long term” interventions have been negative, you’ve asserted it but you haven’t really shown it. I would be very interested in research on this; I’m not aware of any. If it were true, I do think that would be a knock against longtermism as a theory of action (though not decisive, and not against longtermism as a theory of value). Though it could perhaps still be argued that we live at “the hinge of history”, where longtermism is especially useful.
I drew a distinction between a theory of value and a theory of action. A theory of value (or axiology) is a theory about which states of the world are most good. For instance, it might say that a world with more happiness, or more justice, is better than a world with less. A theory of action is a theory about what you should do; for instance, that we should take whichever action produces the maximum expected happiness. Greaves and MacAskill make the case for longtermism as both. But you could imagine accepting longtermism as a theory of value without accepting it as a theory of action.
For instance, you write:
Various philosophers, such as Parfit himself, have suggested that for this reason, many utilitarians should actually “self-efface” their morality. In other words, they should perhaps start to believe that killing large numbers of people is bad, even if it increases utility, because they might simply be wrong about the utility calculation, or might delude themselves into thinking what they already wanted to do produces a lot of utility. I gave some more resources/quotes here.
Thanks for writing!
Thanks ThomasWoodside! I noticed the forum has relatively low throughput so I decided to “learn in public” as it were :)
I understand the Cleopatra paragraph now and I’ve edited my post. I wasn’t able to understand his point before, so I got it wrong. Thanks for explaining it!
This is a good point. I wanted to show that “longtermism is not necessary for long-term changes”, which I think is pretty likely. The more venturesome idea is that “longtermism would not make better long-term changes”, and those examples don’t address that point.
My intuition is that a longtermist mindset likely would not have had a significant positive impact (as in the imaginary examples I wrote), but it’s pretty hard to “prove” that because we don’t have a counterfactual history. We could go through historical examples of people with long-term views (in journals and diaries?) and see whether they had positive or negative impact. That might be a big project though.
These are really good links, thank you!
Same! I agree this is a weakness of my post. Theory of action vs theory of value is a good concept—I don’t have a strong view on longtermism as a theory of value; I mostly care about the theory of action.
Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction — from the Russell–Einstein Manifesto to grassroots movements like Women Strike for Peace, with the slogan “End the Arms Race not the Human Race”.
Concern about the threat of human extinction is not longtermism (see Scott Alexander’s well-known forum post about this), which I think is the point the OP is making.
Yes, exactly—it’s grounded in concern about human extinction, not longtermism. The section “We can achieve longtermism without longtermism” in my post talks about the difference.
Thanks for writing—I skimmed, so I may have missed things, but I think these arguments have significant weaknesses, e.g.:
They draw a strong conclusion about major historical patterns just based on guesswork about ~12 examples (including 3 that are explicitly taken from the author’s imagination).
They do not consider examples which suggest long-term thinking has been very beneficial.
E.g. some sources suggest that Lincoln had long-term motivations for permanently abolishing slavery, saying, “The abolition of slavery by constitutional provision settles the fate, for all coming time, not only of the millions now in bondage, but of unborn millions to come—a measure of such importance that these two votes must be procured.”
As another comment suggests, the argument does not consider ways in which our time might be different (e.g. unusually many people are trying to have long-term impacts, people are less ignorant, tech advances may create rare opportunities for long-term impact).
Another example of long-term thinking working well is Ben Franklin’s bequests to the cities of Boston and Philadelphia, which grew for 200 years before being cashed out. (Also one of the inspirations for the Patient Philanthropy Fund.)
Thank you, this is a great example of longtermist thinking working out that would have been unlikely to happen without it!
To your Lincoln example I’d add attempts at good governance in general—the US Constitution appears to have been written with the express aim of providing long-term, stable, democratic government.
Thanks for adding this as an additional example—the US Constitution is a very good example of how longtermism can achieve negative results! There’s a growing body of research from political scientists suggesting that the Constitution is a major cause of many US governance problems, for example here.
I think the slavery example is a strong example of longtermism having good outcomes, and it probably increased the amount of urgency to reduce slavery.
My base rate for “this time it’s different” arguments is low, except for ones that focus on extinction risk: if you mess up and everyone dies, that’s unrecoverable. But for other things I am skeptical.
Re Cleopatra:
The argument is not that Cleopatra’s action is the beginning of a causal chain. In fact, the present and the future need not be linked causally at all for Parfit’s argument to make sense.
Instead, what he employs is a “reductio ad absurdum”—he takes the non-longtermist position to an extreme where it has counterintuitive implications.
If discounting were valid, then any of Cleopatra’s actions (even something as insignificant as eating dessert) would have mattered much more than anything that happens today (including curing cancer). This seems counterintuitive to most of us. Therefore, something is wrong with this kind of discounting.
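To see the magnitudes involved, here is a rough illustration (the 1% rate and the ~2,000-year gap are my own illustrative assumptions, not figures from Parfit). With a constant annual discount rate $\delta$, a benefit occurring $t$ years in the future is weighted by $(1+\delta)^{-t}$, so a benefit in Cleopatra’s era outweighs an equal present-day benefit by a factor of

$$(1+\delta)^{t} = (1.01)^{2000} \approx 4.4 \times 10^{8}.$$

Even a tiny discount rate, compounded over two millennia, would make Cleopatra’s dessert count for hundreds of millions of times more than a cancer cure of equal undiscounted value today, which is exactly the counterintuitive implication the reductio exploits.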
Here’s a parallel argument: the printing press, the wheel, and all the good things of the past occurred without us having values of human rights, liberalism, etc.; therefore those values don’t matter.
The problem with both arguments is that the point of an ideology like EA or longtermism is to increase the likelihood that people take actions that make big positive impacts in the future. That good things occurred without these beliefs is not an argument for why the beliefs don’t matter.
It is, however, an argument for why we should normally look beyond EA to find people/organizations/opportunities for solving big problems.
Yes, if the post were simply arguing that we should look beyond longtermism for opportunities to solve big problems, it would have more validity. As it stands, the argument is a non sequitur.
Valid—basically I was writing a two-part post. The first part is “longtermism isn’t a necessary condition”, because I thought there would be pushback on that. If we accept this, then we consider the second part, “longtermism may not have a positive effect as assumed”. If I had known the first part was uncontroversial, I would have cut it.
Rhetorically, that just seems strange given all your examples. Human rights are also not a “necessary condition” by your standard, since good things have technically happened without them. But they are, practically speaking, a necessary condition for us to have strong norms of doing good things that respect human rights, such as banning slavery. So I think this is a bait-and-switch with the idea of “necessary condition”.
What do you think would be a good way to word it?
One of the ideas is that longtermism probably does not increase the EV of decisions made for future people. Another is that we increase the EV for future people as a side effect of doing normal things. The third is that increasing the EV for future people is something we should care about.
If all of these are true, then it should be true that we don’t need longtermism, I think?
Yes, if you showed that longtermism does not increase the EV of decisions for future people relative to doing normal things, that would be a strong argument against longtermism.
Some comments on “the road to hell is paved with good intentions”
This podcast is kind of relevant: Tom Moynihan on why prior generations missed some of the biggest priorities of all − 80,000 Hours (80000hours.org)
So people in the Middle Ages believed that the best thing was to save more souls, and I don’t think that exactly failed. That is, if a man’s goal was to have more people believe in Christianity, and he went with sincerity on the Crusades or colonial missionary expeditions, he probably did help achieve that goal.
Likewise, in the 1700s, 1800s, and early 1900s, when the dominant paradigm shifted to one of human progress, people could reliably find ways to improve long-term progress. New science and technology, liberal politics, etc. all would have been straightforward and effective ways to get humanity further along the track of rising population, improved quality of life, and scientific advancement.
Point is, I think people have always tended to be significantly more right than wrong about how to change the world. It’s not too too hard to understand how one person’s actions might contribute to an overriding global goal. The problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant/repetitive/decaying and just a prelude to the afterlife. The second paradigm was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both these paradigms, and instead we have a view of precarity—that an incredibly good future is in sight but only if we proceed with caution, wisdom, good institutions and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.
I don’t mean to particularly agree or disagree with your original post, I just think this is a helpful clarification of the point.
I like this description of your viewpoint a lot! The entire paradigm for “good outcomes” may be wrong. And we are unlikely to be aware of our paradigm due to “fish in water” perspective problems.
Interesting write-up, thanks!
Elsewhere in his article, Parfit discusses probability discounting, which he demonstrates does not always correlate with time-based discounting. I think probability discounting is closer to your intuition about the irrelevance of Cleopatra’s spending on dessert to whether cures for cancer exist now.
Parfit’s example involving Cleopatra is meant to show that a policymaker in her government who used temporal discounting would rank spending on Cleopatra’s dessert as having higher value than spending whose payoff (curing cancer) would not accrue until, presumably, thousands of years later.
Your line of argument against longtermism, that today’s actions have increasingly uncertain far-future outcomes, might coincide with the belief that probability discounting is a good thing even though time-based discounting is, as Parfit claims, nonsense. However, I see a dilemma: assuming that Cleopatra’s government expected her empire to continue indefinitely, its policymakers could allocate money during Cleopatra’s time toward identifying and curing human disease over the long term. It would be a reasonable expectation on the policymakers’ part that the empire’s chances of discovering cures for cancer would go up, not down, over a few thousand years.
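A quick sketch of the difference between the two kinds of discounting (my notation, not Parfit’s): under pure time discounting, a benefit of size $v$ arriving $t$ years from now is valued at $v/(1+\delta)^{t}$, which shrinks mechanically as $t$ grows; under probability discounting, it is valued at $p_t \cdot v$, where $p_t$ is the probability that the benefit actually materializes:

$$\underbrace{\frac{v}{(1+\delta)^{t}}}_{\text{time discounting}} \quad \text{vs.} \quad \underbrace{p_t \cdot v}_{\text{probability discounting}}$$

The two come apart in exactly the case described above: if the empire’s long-term research program makes a cure more likely as the centuries pass, $p_t$ rises over time even as $(1+\delta)^{-t}$ falls.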
Agreed, “probability discounting” is the most accurate term for this. Also, I struck out the part about Cleopatra in the original post, now that I understand the point behind it!
Here is the report (at first I’d been unable to find it)
At this section of my policy platform I have compiled sources with all the major arguments I could find regarding nuclear power. Specifically, under the heading “Fission power should be supported although it is expensive and not necessary”
https://happinesspolitics.org/platform.html#cleanenergy
I think with this compilation of pros/cons, and a background understanding that fossil fuel use is harmful, it is easy to see that nuclear is at least better than using fossil fuels.
I recall that the Founder’s Pledge report on climate change from some years ago discussed nuclear proliferation from nuclear energy, and it seemed like nuclear power plants could equally promote proliferation or work against it (the latter by using up the supply of nuclear fuel). Considering how many lives have been taken by fossil fuels, I feel it’s clear that nuclear energy has been net good. That said, I have a hard time believing that a longtermist in the 1960s would oppose nuclear power plants.
Not that I disagree with the general idea that if you imagine longtermists in the past, they could have come up with a lot of neutral or even harmful ideas.
I think you’re right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appealing to longtermism. But here’s one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important, and because of those arguments, we have many talented people working zealously to solve those issues—people who would otherwise be working on other things.
Obviously this doesn’t address your concern that longtermism is incorrect; it’s merely a reason why, if longtermism is correct, it’s a useful thing to talk about.
Out of curiosity: the phrase “Past performance is not indicative of future results” is often brought up when doing the kind of historical analysis you are presenting.
How much do you think this applies here? Would things look different if we had an Effective Altruism Movement centuries ago?
Effective Altruism movements in the past could have had a wide range of results. For example, the Fabian Society might be an example of positive impact. In the same time period, Communism would be another output of such a movement.
I think past performance is generally indicative of future results. Unless you have a good reason to think that ‘this time is different’, and you have a thesis for why the differences will lead to a materially changed outcome, it’s better to use the past as the base case.
I just found this forum post, which is in the same ballpark of ideas! I mostly agree with it too.
Only read the summary. I agree.