Against longtermism

Hello, it’s me again! I’ve been happily reading through the EA introductory concepts and I would like to have my say on “Longtermism”, which I read about on 80,000 Hours. I have also read Against the Social Discount Rate, which argues that the discount rate applied to future people should be zero.

Epistemic status: I read some articles and papers on the 80,000 Hours and Effective Altruism websites, and thought about whether the arguments made sense. ~12 hours

Summary

  • Longtermism assumes that thoughtful actions now can have a big positive impact in the future. At worst, they could fail and have no effect.

  • But this is probably false.

  • In the past, all events with big positive impacts on the future occurred because people wanted to solve a problem or improve their circumstances, not because of longtermism.

  • All causes indicated by longtermism can be tackled without longtermism.

  • Longtermism is costly and has doubtful benefit. Therefore, it should be deprioritized.

Can we be effective?

Derek Parfit, who co-wrote Against the Social Discount Rate, has said the following:

“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”

This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer? I can think of two possibilities.

  1. Cleopatra uses the extra time saved by skipping dessert to invent biology, chemistry, biochemistry, oncology, and an efficacious cancer treatment. I assign this close to zero probability.

  2. The resources saved by skipping dessert set off a chain reaction that makes Egypt, at the time a client state of the Roman Republic, significantly stronger, which somehow leads to an earlier cure for cancer. I think this is quite unlikely.

Did you see the rhetorical sleight of hand? Parfit claimed that skipping dessert leads to a cure for cancer. We are supposed to take as axiomatic that a small sacrifice now will have benefits in the future. But in fact, we cannot assume this.

Edit: I learned in the comments that I misunderstood this example. It was a hypothetical meant to show that a pure time discount rate is invalid. I agree that the time discount rate is invalid, so I have nothing against this example in its context. Sorry about my misunderstanding!

***

Most of the 80,000 Hours article attempts to persuade the reader that longtermism is morally good, by explaining why we should give moral consideration to future people. But the part about how we are actually able to benefit future people is very short. Here is the entire segment, excerpted:

We can “impact” the future. The implicit assumption – so obvious that it’s not even stated – is that, sure, maybe we don’t know exactly how to be the most effective, but if we put our minds to it, surely we could come up with interventions whose results range from zero (in the worst case) to positive.

The road to hell is paved with good intentions

Would this have been true in the past? I imagined what a high-conviction longtermist would do at various points in history. Our longtermist would be an elite in the society of the time, someone with the ability to influence things. Let’s call him “Steve”. Steve adopts the values of the time he travels to, just as a longtermist in 2022 adopts the values of 2022 when deciding what would benefit future people.

1960 AD

The Cold War has started, and the specter of nuclear annihilation is terrifying. Steve is worried about nuclear existential risk, but realizes that he has no hope of getting the United States and the Soviet Union to disarm. Instead, he focuses on what could affect people in the far future. The answer is immediately obvious: nuclear meltdowns and radioactive waste. Meltdowns can contaminate land for tens of thousands of years, and radioactive waste can similarly remain dangerous for thousands of years. Therefore, Steve uses his influence to obstruct and delay the construction of nuclear power plants. Future generations will be spared the blight of thousands of nuclear power plants everywhere.

1100 AD

Steve is a longtermist in Europe. Thinking for the benefit of future humans, he realizes that he must save as many future souls as possible, so that they are able to enter heaven and enjoy an abundance of utils. What better way to do this than to reclaim the Holy Land from Islamic rule? Some blood may be shed and lives may be lost, but the expected value is strongly positive. Therefore, Steve uses his power to start the Crusades, saving many souls over the next 200 years.
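To make Steve’s reasoning concrete, here is a minimal sketch of the kind of naive expected-value arithmetic he might be running. Every number and the utility scale are invented purely for illustration; nothing here comes from the 80,000 Hours material.

```python
# Toy expected-value calculation of the sort "Steve" might do.
# Every number below is made up for illustration only.

prob_success = 0.5               # Steve's guess that the campaign achieves its goal
souls_saved_per_year = 10_000    # assumed future souls "saved" per year
years_of_benefit = 200           # horizon Steve cares about
utils_per_soul = 1_000           # assumed utils per soul enjoying heaven

lives_lost = 100_000             # assumed present-day casualties
utils_per_life = 100             # assumed utils of one present earthly life

expected_benefit = prob_success * souls_saved_per_year * years_of_benefit * utils_per_soul
expected_cost = lives_lost * utils_per_life

print(f"Expected benefit:   {expected_benefit:,.0f} utils")
print(f"Expected cost:      {expected_cost:,.0f} utils")
print(f"Net expected value: {expected_benefit - expected_cost:,.0f} utils")
```

With these made-up inputs the expected benefit swamps the cost by two orders of magnitude, which is exactly the trap: once the assumed future payoff is large enough, almost any present-day sacrifice looks justified.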

50 BC

Steve is now living in Egypt. Thinking of saving future people from cancer, he convinces Cleopatra to skip dessert. Somehow, this causes Egypt to enter a golden age, and Roman rule over Europe lasts a century longer than it would have.

Unfortunately, this time Steve messed up. He forgot that the Industrial Revolution, which was the starting point for a massive improvement in humanity’s living standards and a prerequisite for curing cancer, happened due to a confluence of factors in the United Kingdom (relatively high bargaining power of labor, the Magna Carta, the balance of power between nobles and the Crown, the strength of the Catholic Church). Roman domination was incompatible with all of those things, and its increased longevity actually delayed the cure for cancer by a century!

Edit: the source I used for this example is invalid, but I think the theme of the second paragraph, that it is hard to know future outcomes, is still plausible if we accept the first paragraph’s hypothetical.

***

I believe these examples show that it’s really unlikely that a longtermist at any point in history would have had a good sense of how to benefit future people. Well-intentioned interventions are just as likely to turn out harmful. I don’t see any reason why the current moment in time would be different. The world is a complex system, and trying to steer the far-future state of a complex system is a fool’s errand.

In the past, what sorts of events have benefitted future humans?

Great question. The articles I read generally point to economic growth as the main cause of prosperity. Economic growth is said to increase due to technological innovation (new discoveries), social innovation (more effective forms of government), and a larger population, which has a multiplier effect.

Let’s look at a few major turning points and see whether longtermism was a significant factor: the discovery of fire, sedentary agriculture, the invention of the wheel, the invention of writing, the printing press, and the Industrial Revolution.

  • Fire was discovered and controlled around a million years ago. While we have no records of anyone’s motivations, it’s likely that they were not thinking of the far future. Most likely they wanted fire because it provided warmth and could be used to cook food, both of which were a big help for survival.

  • Sedentary agriculture developed around 10,000 BC because it produced more food. It also gave an edge against competing tribes, because the denser populations that agriculture supported could defeat hunter-gatherers. The benefits to future humans were not a consideration.

  • The wheel was invented around 4000 BC because it made transporting goods easier. It is unlikely that people were motivated by the utils of far-future humans, yet the invention of the wheel undoubtedly contributed to future prosperity.

  • Writing was first invented around 3400 BC in Mesopotamia. Historians believe that people invented writing to keep track of assets, to record information, and to communicate with people at greater distances. Here, too, it is unlikely that they thought of future humans – they liked writing for its immediate benefits.

  • The printing press was invented around 1440 AD, and Wikipedia says that “The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating.” The resulting Printing Revolution, which fed the literacy and education flywheels, came about as “the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work processes.” It was not because of longtermism.

  • The Industrial Revolution, which ran from roughly 1760 to 1840, was really good for humanity overall. It came about because people wanted to become richer and more powerful. Longtermism is almost never mentioned as a cause of the Industrial Revolution.

In summary, it looks as though most advances that benefitted the future came about because people had a problem they wanted to solve, or wanted to increase the immediate benefits to themselves.

We can achieve longtermist goals without longtermism

There are cases of people taking actions that look as though they require a longtermist mindset to make sense. For example:

  • An Indian man planting 1,360 acres of trees on barren land, turning it into a forest

  • A church in Spain has been under construction for 140 years (intentionally – not due to red tape) and is expected to need at least 10 more years to finish

  • The United States and the Soviet Union built spaceships to explore space, with no reward except for the dream of humanity heading to the stars

  • Energy too cheap to meter through thousands of nuclear power plants (banned by Steve)

But note that an explanation which does not involve longtermism is available for each of these cases:

  • Trees only take several years to grow, and so the man could enjoy the fruits of his labor within his lifetime

  • The act of building the church itself became a focal point and a tourist attraction

  • The space race happened because of geopolitical competition, not longtermism

Longtermism is also not required for many popular causes commonly associated with it. Take existential risks as an example:

  • Pandemic risk prevention can be justified on economic grounds or humanitarian grounds, as it pretty obviously affects current humans; we don’t need longtermism to justify working on this

  • AI risk, on the timelines proposed by knowledgeable researchers, will affect most people alive today within their lifetimes, or within their children’s

  • Work on preventing nuclear war can similarly be justified without longtermism, which we know because it has been justified that way for many decades already

Conclusion

The main point is that intervening for long-term reasons is not productive, because we cannot assume that such interventions will be positive. Historically, interventions based on “let’s think long term”, rather than on solving an immediate problem, have tended to have negative or negligible effects.

Additionally, longtermism was not a motivating factor behind previous increases in prosperity. Nor is it necessary for tackling most current cause areas, such as existential risk. And longtermism is costly, because it reduces popular support for effective altruism through “crowding out” and “weirdness” effects.

So why do we think that longtermism, now, will have a positive effect and serve as a motivating factor?

If it does not serve any useful purpose, then why focus on longtermism?