Why I am probably not a longtermist

tl;dr: I am much more interested in making the future good than in making it long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios that would lock us into a world at least as bad as today’s and that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.

I thought it might be interesting to other EAs to hear why I do not feel very on board with longtermism, since longtermism is important to a lot of people in the community.

This post is about the worldview called longtermism. It does not describe a position on cause prioritisation. It is very possible for causes commonly associated with longtermism to be relevant under non-longtermist considerations.

I have structured this post by crux and highlighted what kind of evidence or arguments would convince me that I am wrong, though I am keen to hear about others I might have missed! I usually did not investigate my cruxes thoroughly; hence only ‘probably’ not a longtermist.

The quality of the long-term future

1. I find many aspects of utilitarianism uncompelling.

You do not need to be a utilitarian to be a longtermist. But I think that, depending on how and where you differ from total utilitarianism, you will probably not go ‘all the way’ to longtermism.

I very much care about handing the world off in a good state to future generations. I also care about people’s wellbeing regardless of when it happens. What I value less than a total utilitarian does is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks (though I very much care about their short-term impact), although that depends on how good and long I expect the future to be (see below).

What would convince me otherwise:

I not only care about pursuing my own values, but would also like to ensure that other people’s reflected values are implemented. For example, if it turned out that most people in the world really care about increasing the human population in the long term, I would prioritise it much more. However, I am less interested in the sum of individual preferences than in the preferences of a wide variety of groups. This is to give more weight to rarer worldviews, and to avoid rewarding one group for outbreeding others or spreading its values in an imperialist fashion.

I also want to give the values of people who are suffering the most more weight. If they think the long-term future is worth prioritising over their current pain, I would take this very seriously.

Alternatively, convincing me of moral realism and the correctness of utilitarianism within that framework would also work. So far I have not seen a plain-language explanation of why moral realism makes any sense; such an explanation would probably be a good start.

If the world suddenly drastically improved and everyone had as good a quality of life as I currently do, I would be happy to focus on making the future big and long instead of improving people’s lives.

2. I do not think humanity is inherently super awesome.

A recurring theme in a lot of longtermist worldviews seems to be that humanity is wonderful and should therefore exist for a long time. I do not consider myself a misanthrope; I expect my views to be about average for Europeans. Humanity has many great aspects which I like to see thrive.

But I find the overt enthusiasm for humanity most longtermists seem to have confusing. Even now, humanity is committing genocides, letting millions of people die of hunger, and enslaving and torturing people, as well as billions of factory-farmed animals. I find this hard to reconcile with a “humanity is awesome” worldview.

A common counterargument to this seems to be that these are problems, but we have just not gotten around to fixing them yet. That humans are lazy, not evil. This does not compel me. I not only care about people living good lives; I also care about them being good people. Laziness is no excuse.

Right now, we have the capacity to do more. Mostly, we do not. Few people who hear about GiveWell-recommended charities decide to donate a significant amount of their income. People take intercontinental tourist flights despite knowing about climate change. Many eat meat despite having heard of conditions on factory farms. Global aid is a tiny proportion of most developed countries’ budgets. These examples are fairly cosmopolitan, but I do not consider that critical.

Taken one at a time, each of these examples can be quibbled with. Sometimes people actually lack the information. They can have empirical disagreements or different moral views (e.g. not considering animals to be sentient). Sometimes they triage and prioritise other ways of doing good. I am okay with all of these reasons.

But in the end, it seems to me that many people have plenty of resources to do better and yet there are still enormous problems left. It is certainly great if we set up better systems in the future to reduce misery and have the right carrots and sticks in place to get people to behave better. But I am unenthusiastic about a humanity which requires these to behave well.

This also makes me reluctant to put much weight on the claim that helping people is equally good regardless of when it happens. That claim only holds if people in the future are as morally deserving as people are today.

Or putting this differently: if humans really were so great, we would not need to worry about all these risks to the future. They would solve themselves.

What would convince me otherwise:

I would be absolutely thrilled to be wrong about how moral people are where I live! Admittedly, I find it hard to think of plausible evidence, as it seems to be in direct contradiction with the world I observe. Maybe it is genuinely a lack of information that stops people from acting better, as e.g. Max Roser from Our World in Data seems to believe. Information campaigns having large effects would be persuasive.

I am unfamiliar with how seriously people take their moral obligations in other places and times. Maybe the lack of investment I see is a local aberration.

Even though this should not have an impact on my worldview, I would probably also feel more comfortable with the longtermist idea if I saw a stronger focus within the longtermist community on social or medical engineering to produce (morally) better people.

3. I am unsure whether the future will be better than today.

In many ways, the world has gotten a lot better. Extreme poverty is down and life expectancy is up. Fewer people are enslaved. I am optimistic about these positive trends continuing.

What I am more skeptical of is how much of the story these trends tell. While probably most people agree that having fewer people starve and die young is good, there are plenty of trends lauded by longtermists which others might feel differently about, for example the decline in religiosity. Or they may put weight on different aspects entirely: someone who values animals in factory farms highly might not think the world has improved.

I am concerned that seeing the world as improving is dependent on a worldview with pretty uncommon values. Using the lens of Haidt’s moral foundations theory, it seems that most of the improvements are in the Care/harm foundation, while the world may not have improved according to other moral foundations like Loyalty/betrayal or Sanctity/degradation.

Also, I expect many world improvements to peter out before they become negative. But I am worried that some will not. For example, I think increased hedonism and individualism have both been good forces, but if overdone I would consider them to make the world worse, and it seems to me we are either almost there or already there.

I am generally concerned about trends overshooting their original good aim by optimising too narrowly. Optimising for profit is the clearest example. I wrote a bit more about this here.

If the world is not better than it was in the past, extrapolating towards an even better future does not work. For me this is another argument for focusing on making the future good instead of long or big.

On a related note, while this is not an argument which deters me from longtermism, the fact that some longtermists look forward to futures which I consider to be worthless (e.g. the hedonium shockwave) puts me off. Culturally, many longtermists seem to favour more hedonism, individualism and techno-utopianism than I would like.

What would convince me otherwise:

I am well aware that lots of people are pessimistic about the future because they get simple facts about how the world has been changing wrong. Still, I am interested in learning more about how different worldviews lead to perceiving the world as improving or not.

The length of the long-term future

I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.

Or looking at it differently: people working on existential risks spent some years convincing me that existential risks are pretty big. Switching from that argument for working on existential risks to longtermism, which requires reaching existential security, gives me a sense of whiplash.

See also this shortform post on the topic. One argument brought up there is the Lindy rule, pointing out that self-propagating systems have existed for billions of years, so we can expect them to last that long again. But I do not see why self-propagating systems should be the baseline; I am only interested in applying the Lindy rule to a morally worthwhile human civilisation, which has been rather short in comparison.
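To make concrete why the choice of baseline matters so much to me, here is a rough sketch of the Lindy reasoning as I understand it, with purely illustrative numbers of my own rather than careful estimates:

```latex
% Lindy heuristic (roughly): expected remaining lifetime is of the same
% order as the lifetime observed so far. Numbers are illustrative only.
\begin{align*}
\text{Baseline: self-propagating life}
  &: \; T_{\text{so far}} \approx 4 \times 10^{9} \text{ years}
  \;\Rightarrow\; T_{\text{remaining}} \sim 10^{9} \text{ years} \\
\text{Baseline: morally worthwhile civilisation}
  &: \; T_{\text{so far}} \approx 10^{2}\text{--}10^{3} \text{ years}
  \;\Rightarrow\; T_{\text{remaining}} \sim 10^{2}\text{--}10^{3} \text{ years}
\end{align*}
```

The conclusion is driven almost entirely by which reference class you pick, and that is exactly where my intuitions differ from the argument in the shortform post.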

I am also not keen to base decisions on rough expected value calculations in which the assessment of the small probability is uncertain and the expected value is the primary argument (as opposed to a more ‘cluster thinking’ based approach). I am not in principle opposed to such decisions, but my own track record with them is very poor: the predicted expected value from back-of-the-envelope calculations does not materialise. I also have traditional Pascal’s mugging type concerns about prioritising the potentially small probability of a very large civilisation.
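To illustrate the structure I am wary of, here is the kind of back-of-the-envelope calculation I mean, with numbers made up purely for illustration:

```latex
% Purely illustrative: both factors below are exactly the quantities
% I do not trust myself (or anyone else) to estimate well.
\mathbb{E}[\text{value}]
  = \underbrace{\Delta p}_{\text{risk reduction, e.g. } 10^{-10}}
    \times
    \underbrace{V_{\text{future}}}_{\text{future value, e.g. } 10^{35} \text{ lives}}
  = 10^{25} \text{ lives in expectation}
```

Almost any near-term intervention is dwarfed by a number like this, yet the conclusion rests entirely on two factors I cannot pin down, which is why I do not want such a calculation to be the primary argument.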

What would convince me otherwise:

I would appreciate solid arguments on how humanity could reach existential security.

The ability to influence the long-term future

I am unconvinced that people can reliably have a positive impact that reaches more than about 100 years into the future (give or take a factor of three). But there is one important exception: if we have the ability to prevent or shape a “lock-in” scenario within this timeframe. By lock-in I mean anything which humanity can never escape from. Extinction is an obvious example; permanent civilisational collapse is another.

I am aware that Bostrom’s canonical definition of existential risks includes both of these lock-in scenarios, but it also includes scenarios which I consider to be irrelevant (failing to reach a transhumanist future), which is why I am not using the term in this section.

Since I think we cannot reliably impact the world for more than several decades, I do not find working on cause areas like ‘improving institutional decision-making’ compelling, except insofar as they can shape or prevent a lock-in within that timeframe.

I am also only interested in lock-in scenarios which would be as bad as or worse than the current world, or maybe not much better. I am not interested in preventing a future in which humans just watch Netflix all day; it would be pretty disappointing, but at least better than a world in which people routinely starve to death.

At the moment, I do not know enough about the probabilities of a range of bad lock-in scenarios to judge whether focusing on them is warranted under my worldview. If, on further investigation, it turns out that it is, I could imagine describing my worldview as longtermist when pushed, but I expect I would still feel a cultural disconnect with other longtermists.

If there are no options to avoid or shape bad lock-in scenarios within the next few decades, I expect improving the world with “traditional neartermist” approaches is best. My views here are very similar to Alexander Berger’s, which he laid out in this 80,000 Hours podcast.

What would convince me otherwise:

If there have been any intentional impacts reaching more than a few hundred years out, I would be keen to know about them. I am familiar with Carl’s blog posts on the topic.

I expect to spend some time investigating this crux soon: if there are bad lock-in scenarios on the horizon which we can avoid or shape, that would likely change my feelings on longtermism.

Given that this is an important crux, one might well consider it premature for me to draw conclusions about my worldview already. But my other views seem sufficiently different from most of the longtermist views I hear that they were hopefully worth laying out regardless.

If anyone has any resources they want to point me to which might change my mind, I am keen to hear about them.

Thanks to AGB and Linch Zhang for providing comments on a draft of this post.

This work is licensed under a Creative Commons Attribution 4.0 International License.