[Linkpost] Eric Schwitzgebel: Against Longtermism
This is a linkpost for https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html
Eric Schwitzgebel, a philosophy professor at UC Riverside, just posted a criticism of longtermism on his blog. In short, his arguments are:
1. We live in a dangerous time in history, but there’s no reason to think the future won’t be at least as dangerous. We’re therefore likely to go extinct sooner rather than later, so the expected value of the future is not nearly as great as many longtermists make it out to be (see the expected-value sketch after this list).
2. It’s incredibly hard to see how to improve the long-term future. For example, should we nearly destroy ourselves (e.g., by starting a cataclysmic yet survivable nuclear war) to avoid the risks from even more dangerous anthropogenic threats?
3. Even setting aside temporal discounting, there are reasonable ethical positions on which one has greater reason to help those temporally closer rather than those farther away. For example, Confucianism holds that we should focus more on those “closer” to us in the moral circle (family, friends, etc.) than on those “farther” away (including, presumably, future people).
4. There’s a risk that longtermism could lead people to ignore the plight of those suffering now. (Although, Schwitzgebel acknowledges, prominent longtermists like Ord also work in more neartermist areas.)
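To make the structure of the first argument concrete (this formalization is mine, not Schwitzgebel’s): suppose humanity faces a constant extinction risk of $r$ per century, and each century survived is worth $v$. The expected value of the future is then a geometric series:

$$\mathbb{E}[V] = \sum_{t=1}^{\infty} v\,(1-r)^{t} = v \cdot \frac{1-r}{r}$$

At $r = 0.1$, that comes to only $9v$: nine centuries’ worth of value, nothing astronomical. The enormous expected values in longtermist arguments require $r$ to eventually fall toward zero, which is precisely the premise this argument denies.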
Overall, the critiques don’t seem original. The third argument, though, strikes me as a useful reminder that the case for longtermism should be examined from other ethical perspectives.
If you enjoyed reading Schwitzgebel’s post, he has another EA-related post on AI alignment, as well as many posts on consciousness (including AI consciousness).
I blogged a response to Schwitzgebel’s four objections here. But I’d welcome any suggestions for better responses!
Your reply to Eric’s fourth objection makes an important point that I haven’t seen mentioned before:
A view, of course, can be true even if defending it in public is expected to have bad consequences. But if we are going to consider the consequences of publicly defending a view in our evaluation of it, it seems we should also consider the consequences of publicly objecting to that view when evaluating those objections.
The third argument seems to capture what a lot of people actually feel about utilitarian and longtermist ethics: they refuse to take impartiality to its logical extreme, and instead remain partial to helping people who feel close to them.
From a theoretical standpoint, few academic philosophers will argue against “impartiality” or against the idea that all people have the same moral worth. But in the real world, just about everyone prioritizes people close to them: family, friends, people of the same country or background. Often this isn’t conceived of as selfishness: my favorite Bruce Springsteen song, “Highway Patrolman”, sings the praises of a police officer who puts family above country and lets his brother escape the law.
Values are a deeply human question, and there’s as much to learn from culture and media as from academic philosophy and logical argument. Perhaps that’s merely the realm of descriptive ethics, and it’s more important to learn the true normative ethics. Or maybe academics have a hard time understanding the general population and would benefit from a more accurate picture of what drives popular moral beliefs.
Thanks for sharing this!
Quoting from the article (underline added):
The point about cooperation carrying risks is interesting and not something I’ve seen elsewhere.