Why I find longtermism hard, and what keeps me motivated

[Cross-posted from the 80,000 Hours blog]

I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?

A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I’ve worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways.

This issue is one aspect of a broader issue in EA: figuring out how to motivate ourselves to do important work even when it doesn’t feel emotionally compelling. It’s useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn’t — on reflection — want to act on.

What I’ve found hard

First, I don’t want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I’d also like to be clear that I’m not speaking for 80,000 Hours as an organisation.

My struggles with the work I’m not doing tend to centre around the humans suffering from preventable diseases in poor countries. That’s largely to do with what I initially worked on when I came across effective altruism. For other people, it’s more salient that they aren’t actively working to prevent the barbarity of some factory farming practices. I’m not going to talk about all of the ways in which people might find it hard to focus on the long-run future — for the purposes of this article, I’m going to focus specifically on my own experience.

I feel a strong pull to help people now

A large part of the suffering in the world today simply shouldn’t exist. People are suffering and dying for want of cheap preventative measures and cures. Diseases that rich countries have managed to totally eradicate still plague millions around the world. There’s strong evidence for the efficacy of cheap interventions like insecticide-treated anti-malaria bed nets. Yet many of us in rich countries are well off financially, and spend a significant proportion of our income on non-essential goods and services. In the face of this absurd and preventable inequity, it feels very difficult to believe that I shouldn’t be doing anything to ameliorate it.

Likewise, it often feels hard to believe that I shouldn’t be helping people geographically close to me — such as homeless people in my town, or people who are being illegitimately incarcerated in my country. It’s hard to deal with there being visible and preventable suffering that I’m not doing anything to combat.

For me, putting off helping people alive today in favour of helping those in the future is even harder than putting off helping those in my country in favour of those on the other side of the world. This is in part due to the sense that if we don’t take actions to improve the future, there are others coming after us who can. By contrast, if we don’t take action to help today’s global poor, those coming after us cannot step in and take our place. The lives we fail to save this year are certain to be lost and grieved for.

Another reason this is challenging is that wealth seems to be sharply increasing over time. This means that we have every reason to believe that people in the future will be far richer than people today, and it would seem to follow that people in the future don’t need our help as much as those in the present. There is no analogue in the case of helping people far away geographically.

The arguments for longtermism aren’t emotionally compelling to me

The reasons we have for improving the lives of those currently alive are emotionally gripping. That’s in part because these are clearly important duties weighing on us, whose force can be vitiated only by some even stronger duty. By comparison, the case for focusing on the longer term feels far more speculative, and relies on careful weighing of complex arguments.

Below I sketch out how I see the arguments for longtermism, and why — despite being convinced of them intellectually — they don’t diminish my sense that we should be alleviating present suffering instead. I’d like to note that this isn’t intended to be a rigorous statement of why we should focus on longtermist causes (which 80,000 Hours has written about elsewhere).

The future of sentient beings is potentially unimaginably large. That means if we have only a very small chance of affecting it in a lasting and positive way, taking that chance is worth it.

One way in which we could affect the long run is by preventing the extinction of all life. The fact that people alive now could wipe out everyone to come undermines the assumption that those who come after us will have the chance to improve the future if we don’t. It also makes irrelevant the fact that people in the future could be richer than us.

There may also be ways that the value in the future can be irreversibly curtailed due to the lock-in of a totalitarian regime rather than an extinction event. That suggests future people may well exist, but be very badly off without our intervention.

These terrible outcomes do seem possible to me. They seem to be the kinds of risks we should be investigating, to figure out whether we can reduce them. And in fact there are many reasons to think that society is usually bad at handling these types of risks: Businesses have incentives to make money in the short run, politicians want to get re-elected in the next couple of years, and individuals tend to be bad at planning (even for their own futures!).

The arguments above make sense to me and I believe them. I believe I ought to prioritise working on improving the long-run future.

Despite this, the arguments still feel speculative. And even if they’re right, there’s no guarantee that I’ll actually have any impact by e.g. improving the representation of future generations in our legislation, or by increasing the body of good global priorities research — let alone by simply trying to do either one of those. I just have to place a bet on being able to make a big positive difference, even though I know it might not work. That makes choosing to do these things — rather than e.g. donate to bednet distribution — feel uncomfortably like gambling with the lives of others.

How I handle that difficulty

Given these problems, it sometimes feels hard to be motivated to do what I think I ought to. One thing I’m heartened by is that working on the long run feels hard in precisely the way I think we should expect effective altruism to feel hard: The more salient a particular problem is — and the more compelling working on it seems — the more we should expect it to already have people tackling it. So I should expect working on the most pressing problems not to feel as intuitively urgent and important as working on some other problems. If it did, it would be less neglected.

What makes the most difference in my motivation day to day is being part of a team I deeply respect and care about. My drive to make those around me happy and to not let down my colleagues makes it easy to work hard. They don’t necessarily need to share my values — if I were earning to give, and needed to do my job well in order to maintain (and increase!) my income, I expect it would very much help me to have colleagues who cared about working to a high standard and the success of the company. In order to avoid letting them down, I imagine I’d be motivated to work hard and do my part.

Another thing which makes a significant difference to my motivation is continuing to think and talk about arguments around what causes and interventions are most pressing. One way I do this is to articulate intuitive worries I have that I’m not working on the right thing as they come up, and debate them with people who have similar values to me. Doing that helps me to get a sense for which of my views feel intuitive but I don’t ultimately believe, and which I actually endorse and can defend.

I also try to keep reading and engaging with arguments that indicate that I should work on other problems. It’s particularly important to keep questioning and fleshing out counterintuitive beliefs, because you can’t rely on your gut to tell you when you’re getting carried away (it already thinks you’re off course!).

That said, it would be disorienting and demotivating to be continuously questioning your direction or work. An important time to do this might be when you’re about to embark on a new project, or change direction significantly. (Although I also quite enjoy keeping track of interesting new arguments as they come up, for example on the EA forum.)

For me, it has also been helpful to make concrete commitments to do what’s most effective. I’m a member of Giving What We Can, which means I’ve pledged to give 10% of my income to the organisations I believe can most effectively improve the world. I actually tend to donate a bit on top of my pledge each year — some to an animal welfare organisation to offset eating meat, and some to a global development organisation (typically the Against Malaria Foundation) because I hate the idea of not doing anything to reduce global poverty. But I always give my 10% to the organisations that I think on balance will do the most good in expectation, because I promised I would.

A technique I have more mixed feelings about is making the harms or lack of benefits in the future feel more concrete. For example, I might imagine that humanity is extinguished in a man-made pandemic as a result of reckless biowarfare, and that the accessible universe then remains empty of intelligent life for eons. Thinking about examples like this gives my intuitions something to latch onto, and reminds me that future harms will be no less real to those experiencing them than present harms.

One of my reservations about this approach is that because there are so many possible terrible outcomes for the world, it seems potentially misleading to latch on to any specific one. Doing so might affect your actions in ways you didn’t intend. One possible way to avoid that might be to try to picture a concrete positive outcome instead: Set your sights on a world of flourishing beings spread across the universe. Personally, I tend to find that less motivating, in part because I think that as we’re currently constituted, living beings have a far greater capacity for pain than pleasure.

With all the above techniques, I think it really helps to have others around you who are thinking in similar ways — you can share concrete suggestions about what works, and feel the relief of knowing you’re not the only one finding things hard. Being part of the effective altruism community makes a big difference for me in these ways, whether that’s online (for example, the EA Forum) or in person (I’ve been lucky enough to usually live somewhere with a thriving local EA group).

When I’m really struggling to do the right thing, I come back to the fact that with all the uncertainty around longtermism, there is one thing I’m sure about: I care about people in the future, just like I care about people now. I would send a bednet to protect a baby, even if the baby wasn’t yet conceived, and I would train a paediatrician now for the benefit of children for decades to come.

There are so many possible people in the future who have no ability at all to advocate for themselves. Society as it stands is essentially entirely ignoring them. I can’t see those people in pictures, and I have no idea which things will actually afflict them, or if they’ll ever get to live. But I can use my career to try to make things better for them, in expectation. And I believe that’s what I should do.