Towards a Weaker Longtermism

A key (new-ish) proposition in EA discussions is “Strong Longtermism,” that the vast majority of the value in the universe is in the far future, and that we need to focus on it. This far future is often understood to be so valuable that almost any amount of preference for the long term is justifiable.

In this brief post, I want to argue that this strong claim is unnecessary, that it creates new problems a weaker claim easily avoids, and that it should be replaced with that weaker claim. (I am far from the first to propose this.)

The ‘regular longtermism’ claim, as I present it, is that we should assign approximately as much value to the long-term future as we do to the short term. This is a philosophically difficult position which, I argue, is nonetheless superior to either the status quo or strong longtermism.

Philosophical grounding

The typical presentation of longtermism is that if we do not discount future lives exponentially, then almost any weight placed on the future, which can almost certainly contain vastly more value than the present, will overwhelm the value of the present. This is hard to justify intuitively: it implies that we should ignore near-term costs, and, taken to the extreme, could justify almost any atrocity in pursuit of a minuscule reduction in long-term risk.

The typical alternative is naïve economic discounting, which assumes that we should exponentially discount the far future at some fixed positive rate. This leads to claims that a candy bar today is worth more than the entire future of humanity starting in, say, 10,000 years. This is also hard to justify intuitively.
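To make the two failure modes concrete, here is a minimal sketch; the 3% rate, the 10,000-year horizon, and the value figures are my own illustrative assumptions, not numbers from the argument above.

```python
# Illustrative only: the 3% rate, the 10,000-year horizon, and the value
# figures below are assumptions chosen to show the shape of the problem.
annual_rate = 0.03
years = 10_000

# Strict exponential discounting: value t years out is multiplied by
# (1 + r) ** -t, which is vanishingly small at any positive rate.
discount_factor = (1 + annual_rate) ** -years
print(f"discount factor after {years} years: {discount_factor:.1e}")  # ~4e-129

# No discounting, with a vast possible future: even a tiny weight on the
# far future swamps the present entirely.
present_value = 8e9      # stand-in: order of today's population
future_value = 1e16      # stand-in: one guess at the scale of future lives
tiny_weight = 1e-4
print(tiny_weight * future_value > present_value)  # True: 1e12 > 8e9
```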

A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future. This preserves both the value of humanity’s long-term future, if positive, and the preference for the present. Lacking any strong justification for setting the balance, I will very tentatively claim the two should be weighted approximately equally, but this is not critical; almost any nontrivial weight on the far future would be a large shift from the status quo towards longer-term thinking. This may be non-rigorous, but it has many attractive features.
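As a rough sketch of what this split could look like in practice (the linear form and the weights are illustrative assumptions, not a worked-out decision theory):

```python
# A minimal sketch, assuming a simple linear split between a conventionally
# discounted near-term term and a separate long-term term; the weights are
# illustrative, and the post only claims "approximately equal" tentatively.
def total_value(near_term_value, long_term_value, w_far=0.5):
    w_near = 1.0 - w_far
    return w_near * near_term_value + w_far * long_term_value

# Status quo (w_far near zero) versus the tentative equal split:
print(total_value(100, 1_000, w_far=0.01))  # ~109: the long term barely registers
print(total_value(100, 1_000, w_far=0.5))   # 550.0: the long term dominates
```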

The key question, it seems, is whether this view actually differs from the existing positions, and/or whether the exact weights given to the near and long term matter in practice.

Does ‘regular longtermism’ say anything?

Do the different positions lead to different conclusions in the short term? If they do not, there is clearly no reason to prefer strong longtermism. If they do, it seems that almost all of these differences are intuitively worrying. Strong longtermism implies we should engage in much larger near-term sacrifices, and justifies ignoring near-term problems like global poverty unless they have large impacts on the far future. Strong neartermism, AKA strict exponential discounting, implies that we should do approximately nothing about the long-term future.

So, does regular longtermism suggest less focus on reducing existential risks, compared to the status quo? Clearly not. In fact, it suggests overwhelmingly more effort should be spent on avoiding existential risk than is currently available for the task. It may suggest less effort than strong longtermism, but only to the extent that we have very strong epistemic reasons for thinking that very large short-term sacrifices are effective.

What now?

I am unsure that there is anything new in this post. At the same time, it seems that the debate has crystallized into two camps, both of which I strongly disagree with: the “anti-longtermist” camp, typified by Phil Torres, who is horrified by the potentially abusive view of longtermism, and Vaden Masrani, who wrote a criticism of the idea; versus the “strong longtermism” camp, typified by Toby Ord (Edit: see Toby’s comment) and Will MacAskill (Edit: see Will’s comment), who seem to imply that Effective Altruism should focus entirely on longtermism. (Edit: I should now say that it turns out that this is a weak-man argument, but also note that several commenters explicitly say they embrace this viewpoint.)

Given the putative dispute, I would be very grateful if we could start to figure out, as a community, whether the strong form of longtermism is a tentative attempt to work out a coherent position that doesn’t have potentially worrying implications, or whether it is intended as a philosophical shibboleth. I will note that my typical-mind-fallacy view is that both sides actually endorse, or at least only slightly disagree with, my mid-point view, but I may be completely wrong.

  1. Note that Will has called this “very strong longtermism”, but it seems unclear how a line is drawn between the very strong and strong forms. This is especially true because the definition-based version he proposes, that human lives in the far future are equally valuable and should not be discounted, seems to lead directly to the very strong longtermist conclusion.

  2. (Edited to add:) In contrast, any split of value between the near term and the long term completely changes the burden of proof for longtermist interventions. As noted here, given strong longtermism, we would have a clear case for any positive-expectation risk-reduction measure, and the only way to refute it would be to claim that the expected risk reduction is actually negative. With a weaker form, we can perform cost-benefit analysis to decide whether the near-term loss is worthwhile, as the sketch below illustrates.
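A minimal sketch of how that decision rule changes, using assumed weights and made-up numbers rather than any actual intervention:

```python
# Hedged sketch: the decision rule under strong longtermism versus a value split.
def approve_strong_longtermism(expected_long_term_benefit):
    # Under strong longtermism, any positive expected long-term benefit is
    # enough; the only rebuttal is that the expectation is actually negative.
    return expected_long_term_benefit > 0

def approve_weighted_view(expected_long_term_benefit, near_term_cost, w_far=0.5):
    # Under a near/long split, the weighted long-term benefit must also
    # outweigh the weighted near-term cost, so ordinary cost-benefit
    # analysis re-enters the picture.
    w_near = 1.0 - w_far
    return w_far * expected_long_term_benefit > w_near * near_term_cost

# A tiny positive long-term expectation paired with a large near-term cost:
print(approve_strong_longtermism(0.001))   # True
print(approve_weighted_view(0.001, 10.0))  # False
```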