Against “longtermist” as an identity
This post is mostly addressed to people who care a lot about improving the long-term future and helping life continue for a long time, and who might be tempted to call themselves “longtermist.”
There have been discussions about how “effective altruist” shouldn’t be an identity and some defense of EA-as-identity. (I also think I’ve seen similar discussions about “longtermists” but don’t remember where.) In general, there has been a lot of good content on the effect of identities on truth-seeking conversation (see Scout Mindset or “Keep Your Identity Small”[1]).
But are there actual harms of identifying as a “longtermist”? I describe two in this post: it can make it harder to change your mind based on new information, and it can make your conversations and beliefs more confused by adding aspects of the group identity that you’d otherwise not have adopted as part of your individual identity.
(0) What is “longtermism”?
When people say “I’m a longtermist,” they mean something like: “I subscribe to the moral-philosophical theory of longtermism.”
So, what is “longtermism?” The definition given in various places is “the view that positively influencing the long-term future is a key moral priority of our time.” In practice, this often relies on certain moral and empirical beliefs about the world:
1. Believing that future beings are morally relevant — this belief is needed in order to argue that we should put resources toward helping them if we can, even if those efforts trade off against resources that could go to beings alive today (a moral belief)
2. Not putting a significant “pure temporal discount rate” on the moral value of future beings (a person in 1000 years is “worth” basically as much as a person today) (a moral belief)
3. Believing that the future is big — that the value of the future can be enormous, and thus the scope of the issue is huge (an empirical belief)
4. Thinking that it’s actually possible to help future beings in non-negligible ways in expectation[2] (an empirical belief)
It’s worth pointing out that most people (even those who wouldn’t call themselves “longtermist”) care about future beings. Some people disagree on (2) or (3). And lots of people in effective altruism who disagree with longtermism, I think, disagree primarily with (4): the feasibility of predictably helping the future.
Importantly, I think that while your position on 1-2 can depend significantly on strongly felt beliefs,[3] your position on 3-4 rests more on facts about the world: empirical data and arguments that can be refuted.[4]
(1) Calling yourself “longtermist” bakes empirical or refutable claims into an identity, making it harder to course-correct if you later find out you’re wrong.
Rewind to 2012 and pretend that you’re an American who’s realized that, if you want to improve the lives of (current) people, you should probably be aiming your efforts at helping people in poorer countries, not the US. You probably wouldn’t call yourself a “poor-country-ist.”[5] Instead, you might go for “effective altruist”[6] and treat the conclusion that the most effective interventions help people in developing countries as an empirical conclusion given certain assumptions (like impartiality over geographic location). If it later turns out that the most effective intervention is to help Americans, you can pivot to that without sacrificing or fighting your identity.[7]
Some of the earliest and most influential proponents of longtermism agree that longtermism is not the for-sure-correct approach/belief/philosophy.
For instance, Holden Karnofsky writes:
There are arguments that our ethical decisions should be dominated by concern for ensuring that as many people as possible will someday get to exist. I really go back and forth on how much I buy these arguments, but I’m definitely somewhere between 10% convinced and 50% convinced. So … say I’m “20% convinced” of some view that says preventing human extinction is the overwhelmingly most important consideration for at least some dimensions of ethics (like where to donate), and “80% convinced” of some more common-sense view that says I should focus on some cause unrelated to human extinction.
I think you should give yourself the freedom to discover later that some or all of the arguments for longtermism were wrong.
As Eric Neyman puts it when discussing neoliberalism as an identity [bold mine]:
In a recent Twitter thread, I drew a distinction between effective altruism and neoliberalism. In my mind, effective altruism is a question (“What are the best ways to improve the world?”), while neoliberalism is an answer (to a related question: “What public policies would most improve the world?”). Identities centered around questions seem epistemically safer than those centered around answers. If you identify as someone who pursues the answer to a question, you won’t be attached to a particular answer. If you identify with an answer, that identity may be a barrier to changing your mind in the face of evidence.
(2) The identity framing adds a bunch of group-belief baggage
Instead of saying something like, “together with such-and-such assumptions, the arguments for longtermism, as described in XYZ, imply YZX conclusions,” the group identity framing leads us to make statements like “longtermists believe that we should set up prediction markets, the ideal state of the world is 10-to-the-many brains in vats, AI will kill us all, etc.”
Similarly, when you don the “longtermist” hat, you might feel tempted to pick up beliefs you don’t totally buy, without fully understanding why this is happening.
It’s possible and valid to believe that future people matter, that we should make sure that humanity survives, and that this century is particularly influential — but not that AI is the biggest risk (maybe you think bio or some other thing is the greatest risk). That view should get discussed and tested on its merits but might get pushed out of your brain if you call yourself a “longtermist” and let the identity and the group beliefs form your conclusions for you.[8]
P.S.
A note of caution: the categories were made for man, not man for the categories.
I’m not saying that there’s no such thing as a group of people we might want to describe with the term “longtermists.” I think this group probably exists and the term “longtermist” can be useful for pointing at something real.
I just don’t really think that these people should want to call themselves “longtermist,” cementing the descriptive term for a vaguely defined group of people as a personal or individual identity.
Some things you can say instead of “I’m a ‘longtermist,’” depending on what is actually true:
1. I think the arguments for longtermism are right.
2. I think that positively influencing the long-term future is a key moral priority of our time.
(This and #1 are the closest to being synonymous with or a literal translation of “I am a longtermist.”)
3. I try to do good chiefly by helping the long-run future.
Or: “I believe that most of my predictable positive impact on the world comes from improving the long-run future.”
(This is also a decently close translation of “I am a longtermist.”)
4. I don’t want humanity to go extinct (because there’s so much potential), and I’m trying to reduce the chances of that happening this century.
5. I think this century might be particularly important, and I think that, in expectation, most of my impact comes from the world in which that’s true, so I’m going to act as if it is.
6. I think existential risks are real, and given the scope of the problem, I think we should put most of our efforts into fighting them. (And I think the biggest existential risk is the possibility of engineered pandemics.)
See also: “Long-Termism” vs. “Existential Risk”
7. I believe that future beings matter, and I’m working on figuring out whether there’s anything we can or should do to help them. I’ll get back to you when I figure it out.
Thanks to Jonathan Michel and Stephen Clare for leaving comments on a draft! I’m writing in a personal capacity.
[1] “Keep Your Identity Small” argues that when a position on a certain question becomes part of your identity, arguments about topics related to that question become less productive. The examples used are politics, religion, etc. Short excerpt: “For example, the question of the relative merits of programming languages often degenerates into a religious war, because so many programmers identify as X programmers or Y programmers. This sometimes leads people to conclude the question must be unanswerable—that all languages are equally good. Obviously that’s false: anything else people make can be well or badly designed; why should this be uniquely impossible for programming languages? And indeed, you can have a fruitful discussion about the relative merits of programming languages, so long as you exclude people who respond from identity.”
[2] I’ll have a post about this soon. (Committing publicly right now!)
[3] This is disputable, but not crucial to the argument, and I think I’m presenting the more widely held view.
[4] How chaotic is the world, really? How unique is this century? What’s the base rate of extinction?
[5] (I acknowledge that I’m putting very little effort into producing good names.)
[6] (or reject identity labels like this entirely…)
[7] Note that this reasoning applies to other worldviews, too — even some that are widespread as identities.
One example is veganism. If you’re vegan for consequentialist reasons, then your reasoning for being vegan is almost certainly dependent on empirical beliefs which may turn out to be false. (Although perhaps the reasoning behind veganism is less speculative and more tested than that behind longtermism — I’m not sure.) If the reasoning might be wrong, it might be better to avoid having veganism as part of your identity so that you can course-correct later.
The analogy with veganism also brings up further discussion points.
One is that I think there are real benefits to calling yourself “a vegan.” For instance, if you believe that adhering to a vegan diet is important, having veganism as part of your identity can be a helpful accountability nudge. (This sort of benefit is discussed in Eric’s post, which I link to later in the body of the post.) It can also help get you social support. But I still think the harms that I mention in this document apply to veganism.
Similarly, there are probably benefits to calling yourself “a longtermist” beyond ease and speed.
A different discussion could center around the politicization of veganism, whether that’s related to its quality as a decently popular identity, and whether that has any implications for longtermism.
[8] Stephen describes related considerations in a comment.