Proposed tweak to the longtermism pitch

TL;DR I propose tweaking the longtermism pitch to focus on how it builds on an already-common practice of making decisions with concern for future people.

Thanks to Max Clarke, Ben Wylie-van Eerd, and Tyrone Barugh for feedback. All views are my own.

Epistemic status: This post is a recommendation based on what I feel when I read articles introducing longtermism to a general audience. I know some people feel similarly and others feel differently; I suspect this is because of differing norms between philosophical writing and wider media. I am reasonably confident (75%) that the changes I propose would make longtermism seem more approachable to someone who has never heard of EA or longtermism before and hasn’t studied philosophy. On the other hand, I believe the changes might decrease the perceived rigor of the argument for readers who want ideas built from first principles, or who have studied philosophy.

Intro:

When I read intro articles on longtermism, there’s often something about the tone of the argument that bugs me. I believe there is a missed opportunity to connect with a wider range of readers. It’s also a missed opportunity to build connections and a sense of kinship with others who (at least partially) agree with longtermist values.

I make two main arguments in this post:

  1. Longtermism could be (and usually is) introduced as “caring about future people” x “the future could be extremely long and big.”

  2. “Caring about future people” is already quite a common concept and it feels condescending to imply otherwise.

Finally, I suggest one way the “intro to longtermism” pitch could be adapted to connect with a slightly wider audience.

How I interpret longtermism:

In his recent New York Times article, William MacAskill introduces longtermism as

(1) “The idea that positively influencing the long-term future is a key moral priority of our time.”

My working definition of longtermism for this post is MacAskill’s definition, plus the following two premises that are used to derive it:

(2) People who live in the future are as morally relevant as people who are alive today.

(3) Humanity is extremely young compared to its potential timeline.

What is and isn’t new about longtermism:

It seems (to me) that when people introduce longtermism, all three of these concepts are pitched as ‘new’ to the reader and explained from the ground up. The author could instead choose to pitch only specific parts as new. I say this because, in a range of non-EA communities in my life, point (2) is actually quite commonly accepted. For example, in public discourse on climate change, people talk about taking action to improve the lives of people one or two hundred years from now[1]. In many indigenous communities, decisions are made with explicit consideration of how an action will affect descendants.

When I read something that implies point (2) is new to the audience, it feels gently condescending towards these other communities. It gives the impression that the author is claiming this as a novel idea of their own, without realizing that many others also use this idea to inform their decisions. It makes me wonder how far and wide the author listens to others, and therefore whether their theories have sprung from a widely informed worldview, or from an ivory tower.

This isn’t a strong, angry reaction; it’s just a quiet question in the back of my mind that I dismiss pretty quickly. Whenever I get this impression, I assume that the author doesn’t intend to make this claim at all. I’m not claiming that my impression is accurate; I’m merely trying to describe why this particular way of talking about longtermism makes me feel slightly uncomfortable. Some of my friends have said they don’t get this impression at all; others have said they disengage from academics precisely because they get this impression from conventions in academia. I worry that this kind of over-explaining in non-academic contexts might turn some people off from engaging with the actual ideas of longtermism.

Quick aside: I do believe that longtermism is genuinely making a new argument, by asking people to consider “future people are morally relevant” in combination with “humanity is extremely young compared to our potential length of existence.” I just wince slightly when it seems that “we should care about future people” is being presented as a novel idea.

I’ve been mulling over how I might phrase it differently. I propose clarifying which ideas the author is presenting as new, and being more explicit about the ways longtermism resembles existing, widely held philosophies.

What might this look like?

I propose a slight change to the longtermism intro pitch to acknowledge that caring for future people is a common concept across many spheres of humanity. This would position longtermism as a collaboration with other philosophies, rather than as a new and distinct philosophy.

For example, in the following excerpt (taken from MacAskill’s NY Times article mentioned above), the middle paragraph uses a small scenario to introduce the reader to the idea that future people count. My uncharitable reading is that the author thinks “future people count” is an idea that is new to the reader, albeit one that should be obvious once the reader takes the time to think about it.

But some simple ideas exerted a persistent force on my mind: Future people count. There could be a lot of them. And we can make their lives better. To help others as much as possible, we must think about the long-term impact of our actions.

The idea that future people count is common sense. Suppose that I drop a glass bottle while hiking. If I don’t clean it up, a child might cut herself on the shards. Does it matter when the child will cut herself — a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.

Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don’t exist yet.

The same argument could instead be presented as follows:

But some simple ideas exerted a persistent force on my mind: Future people count. There could be a lot of them. And we can make their lives better. To help others as much as possible, we must think about the long-term impact of our actions.

We already do this, of course. We often act out of care for our descendants, or the next generation. We try to leave them a world that’s better than the one we have today. When we discuss climate change, we frame it with questions like “What will this mean for people born a hundred years from now?” The seventh generation principle and similar concepts from other indigenous cultures ask that decisions take into account their impact on people in the long-term future.

Longtermism combines this empathy for future people with a consideration of the potentially thousands, and hopefully millions, of years that humanity may continue to exist. All these people will have hopes and joys and pains and regrets, just like the rest of us. The actions we collectively take today could dramatically shape the world in which they live, and the opportunities they have to lead a fulfilling life.

To me, this revised introduction makes it clear that longtermism is a new concept, yet one that builds on concepts the reader might already be familiar with. I would anticipate it coming across as warmer and more approachable to anyone who shares the reaction I described above.

Thoughts?

  1. ^

    I know this isn’t what’s typically considered “long term” in longtermism, but it is reasonably long term in the context of public discourse.