[Linkpost] Eric Schwitzgebel: Against Longtermism

This is a linkpost for https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html

Eric Schwitzgebel, a philosophy professor at UC Riverside, just posted a criticism of longtermism on his blog. In short, his arguments are:

  1. We live in a dangerous time in history, but there’s no reason to think that the future won’t be at least as dangerous. Thus, we’ll likely go extinct sooner rather than later, so the expected value of the future is not nearly as great as many longtermists make it out to be.

  2. It’s incredibly hard to see how to improve the long-term future. For example, should we almost destroy ourselves (e.g., begin a cataclysmic yet survivable nuclear war) to avoid the risks from even more dangerous anthropogenic threats?

  3. Even setting temporal discounting aside, there are reasonable ethical positions from which one might still have greater reason to help those closer to us in time than those farther away. For example, Confucianism says we should focus more on those “closer” to us in the moral circle (friends, family, etc.) than those “farther” (including, presumably, future people).

  4. There’s a risk that longtermism could make people ignore the plight of those currently suffering. (Although, Schwitzgebel acknowledges, prominent longtermists like Ord also work in more neartermist areas.)

Overall, the critiques don’t seem original, but the third argument strikes me as a useful reminder that it is important to examine the case for longtermism from other ethical perspectives.

If you enjoyed reading Schwitzgebel’s post, he has another EA-related post about AI alignment (as well as many posts on consciousness, e.g., in AI).