Why haven’t we seen a promising longtermist intervention yet?

The word “longtermism” was coined in 2017 and discussed here on the Effective Altruism Forum at least as early as 2018.[1] In the intervening eight years, a few books on longtermism have been written, many papers have been published, and countless forum posts, blog posts, tweets, and podcasts have discussed the topic.

Why haven’t we seen a promising longtermist intervention yet? For clarity, a longtermist intervention should meet all of the following criteria:

  • Promising: the intervention seems like a good idea and has strong evidence and reasoning to support it

  • Novel: it’s a new idea, proposed after the term “longtermism” was coined in 2017, and first put forward by someone associated with longtermism in explicit connection to the term “longtermism”

  • Actionable: it’s something people could realistically do now or soon

  • Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns

In my view, the strongest arguments pertaining to the moral value of far future lives are arguments about existential risk. However, the philosopher Nick Bostrom’s first paper on existential risk, highlighting the moral value of the far future, was published in 2002, which is 15 years before the term “longtermism” was coined. The philosopher Derek Parfit discussed the moral value of far future lives in the context of human extinction in his 1984 book Reasons and Persons.[2] So, the origin of these ideas goes back much further than 2017. Moreover, existential risk and global catastrophic risk have developed into a small field of study of their own, and were topics well-known in effective altruism before 2017. For this reason, I don’t see interventions related to existential risk (or global catastrophic risk) as novel longtermist interventions.

Many of the non-existential risk-related interventions I’ve heard about are things people have been doing in some form for a very long time. General appeals to long-term thinking, as wise as they might be, do not present a novel idea. The philosophers Will MacAskill and Toby Ord coined the term “longtermism” while working at Oxford University, which is believed to be at least 929 years old. I’ve always thought it was ironic, therefore, to present long-term thinking as novel. (“You think you just fell out of a coconut tree?”)

I have seen that (at least some) longtermists acknowledge this. In What We Owe the Future, MacAskill discusses the Haudenosaunee (or Iroquois) Seventh Generation philosophy, which enjoins leaders to consider the effects of their decisions on the subsequent seven generations. MacAskill also acknowledges the California non-profit the Long Now Foundation, created in 1996, which encourages people to think about the next 10,000 years. While 10,000 years is not the usual timespan people think about, some form of long-term thinking is an ancient part of humanity.

Two proposed longtermist interventions are promoting economic growth and trying to make moral progress. These are not novel; people have been doing both for a long time. Whether these ideas are actionable is unclear, since so much effort is already allocated toward these goals. It’s also unclear whether they are genuinely longtermist. The benefits of economic growth and moral progress start paying off within one’s own lifetime, and seem to be sufficient motivation to pursue them to nearly the maximum extent.

Other projects like space exploration — besides not being a novel idea — might be promising and genuinely longtermist, but not actionable in the near term. The optimal strategy with regard to space exploration, if we’re thinking about the very long-term future, is probably procrastination. The cost of delaying a major increase in spending on space exploration for at least a few more decades, or even for the next century, is small in the grand scheme of things. There is Bostrom’s astronomical waste argument, sure — every moment we delay interstellar expansion means we can reach fewer stars in the fullness of time — but even Bostrom concluded that doing well over the next century or so, and securing a path to a good future, is more important than rushing to expand into space as fast as possible. Right now, we have problems like global poverty, factory farming, pandemics, asteroids, and large volcanoes to worry about. If everything goes right, in a hundred years, we’ll be in a much better position to invest much more in space travel.
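The smallness of the delay cost can be sketched with a toy calculation. The numbers here are my own assumptions, not Bostrom’s figures: if expansion proceeds outward at a roughly constant speed, the volume settled by some far-future horizon time scales like the cube of the remaining time, so a century of delay against a billion-year horizon forfeits only a minuscule fraction of the reachable volume (ignoring cosmic expansion and other complications).

```python
# Toy estimate (assumed numbers, not from any cited source): if expansion
# proceeds at a fixed speed, the volume settled by horizon time T scales
# like (T - d)**3 after a delay of d years. Ignores cosmic expansion.
T = 1_000_000_000  # assumed horizon: a billion years of expansion
d = 100            # assumed delay: wait a century before starting

fraction_lost = 1 - ((T - d) / T) ** 3
print(f"fraction of reachable volume forfeited: {fraction_lost:.2e}")
```

Under these (admittedly crude) assumptions, the forfeited fraction is on the order of three parts in ten million, which is why a century of patience looks cheap in the grand scheme of things.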

Another proposal is patient philanthropy, the idea that longtermists should set up foundations that invest donations in a stock market index fund for a century or more. The idea is to allow the wealth to compound and accumulate. There are various arguments against patient philanthropy. Patient philanthropy mathematically blows up within 500 years: the wealth concentrated in the foundations grows to a politically (and morally) unacceptable level, on the order of 40% to 100% of all of society’s wealth. Some people define longtermism as being concerned with outcomes 1,000 years in the future or more, so an intervention that can’t continue for even 500 years maybe shouldn’t count as longtermist. It’s also unclear whether this should count as an intervention in its own right. Patient philanthropy doesn’t say what the money should actually be used for; it just says that the money should be put aside so it can grow and be used later, with the decision about what to use it for, and when to use it, deferred indefinitely.[3]
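The blow-up dynamic is easy to see with some illustrative numbers (my own assumptions, not figures from the literature): whenever the fund’s return beats economy-wide growth, its share of total wealth compounds by the difference every year, so even a tiny initial endowment eventually absorbs nearly everything.

```python
# Illustrative sketch with assumed numbers: a patient fund whose returns
# beat economy-wide growth slowly absorbs almost all of society's wealth.
fund_return = 0.05   # assumed real return on the fund's investments
gdp_growth = 0.02    # assumed real growth of everyone else's wealth
fund_share = 0.0001  # fund starts with 0.01% of society's wealth

share_at = {}  # fund's fraction of total wealth at selected years
for year in range(1, 501):
    fund = fund_share * (1 + fund_return)        # fund compounds
    rest = (1 - fund_share) * (1 + gdp_growth)   # rest of economy grows
    fund_share = fund / (fund + rest)
    if year in (100, 300, 500):
        share_at[year] = fund_share
        print(f"year {year}: fund holds {fund_share:.1%} of all wealth")
```

With these assumed rates, the fund’s share stays negligible for the first century, crosses into the tens of percent by year 300, and approaches 100% by year 500 — the politically unacceptable concentration the argument points to.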

The rationale for patient philanthropy is that the money can be used to respond to future emergencies or for exceptionally good opportunities for giving. However, it isn’t clear why patient philanthropy would be the best way to make that funding available. We saw in 2020 the huge amount of resources that societies can quickly mobilize to respond to emergencies. Normal foundations that regularly disburse funds are often already on the lookout for good opportunities; we should expect, barring catastrophe, foundations like these will exist in the future. The promisingness of patient philanthropy is, therefore, dubious.

This is the pattern I keep seeing. Every proposed longtermist intervention I’ve been able to find so far fails to meet at least one of the four criteria listed above (and often more than one). This wouldn’t be so bad if not for the way longtermism has been presented and promoted. We have been told that longtermism is a bracing new idea of great moral importance, in light of which the effective altruist movement, philanthropy, and possibly much else besides should change course. I think it’s a wonderful thing to generate creative or provocative ideas, but the declaration that an idea is morally and practically important should not get ahead of producing some solid advice, motivated by the new idea, that is novel and actionable.

Occasionally, I’ll see someone in the wider world mention longtermism as a radical, unsettling idea. It typically seems like they’ve confused longtermism with another idea, like transhumanism. (In fairness, I’ve also seen people within the effective altruism community conflate these ideas.) As I see it, the problem with longtermism is not that it’s radical and unsettling, but that it’s boring, disappointing, overly philosophical, and insufficiently practical. If longtermism is such a radical, important idea, why haven’t we seen a promising longtermist intervention yet?

  1. ^

    Edited on December 18, 2025 at 7:10 PM Eastern to add:

    Correction, sorry. I originally said the term longtermism was first used on the EA Forum in 2017. The post I was thinking of was actually from 2019 and said the term had been coined in 2017. I changed this sentence to reflect the correct information.

    Apologies for the error.

  2. ^

    Nick Bostrom cites Derek Parfit’s argument in Reasons and Persons in his 2013 TEDxOxford talk on existential risk.

  3. ^

    If I put money in a bank account now and earmark it for “longtermist interventions”, does that, in itself — me putting the money in a bank account — count as a longtermist intervention? Or do I need to come up with a more concrete idea first?