This post is about a question:
What does longtermism recommend doing in all sorts of everyday situations?
I’ve been thinking (on and off) about versions of this question over the last year or two. Really, I don’t want sharp answers which try to identify the absolute best actions in various situations (those are likely to be extremely context-dependent, and perhaps also weird or hard to find), but good blueprints for longtermist decision-making in everyday situations: pragmatic guidance which will tend to produce good outcomes if followed.
The first part of the post explains why I think this is an important question to look into. The second part talks about my current thinking and some tentative answers: that everyday longtermism might involve seeking to improve decision-making all around us (skewing towards more important decision-making processes), while abiding by commonsense morality.
A lot of people provided some helpful thoughts in conversation or on old drafts; interactions that I remember as particularly helpful came from: Nick Beckstead, Anna Salamon, Rose Hadshar, Ben Todd, Eliana Lorch, Will MacAskill, Toby Ord. They may not endorse my conclusions, and in any case all errors, large and small, remain my own.
Motivations for the question
There are several different reasons for wanting an answer to this. The most central two are:
Strong longtermism says that the morally right thing to do is to make all decisions according to long-term effects. But for many, many decisions it’s very unclear what that means.
At first glance the strong longtermist stance seems like it might recommend throwing away all of our regular moral intuitions (since they’re not grounded in long-term effects). This could leave some dangerous gaps; we should look into whether those intuitions get rederived from different foundations, or whether something else should replace them.
More generally it just seems like if longtermism is important we should seek a deep understanding of it, and for that it’s good to look at it from many angles (and everyday decisions are a natural and somewhat important class).
Having good answers to the question of everyday longtermism might be very important for the memetics / social dynamics of longtermism.
People encountering and evaluating an idea that seems like it’s claiming broad scope of applicability will naturally examine it from lots of angles.
Two obvious angles are “what does this mean for my day-to-day life?” and “what would it look like if everyone were on board with this?”.
Having good and compelling answers to these could be helpful for getting buy-in to the ideas.
I think an action-guiding philosophy is at an advantage in spreading if there are lots of opportunities for people to practice it, to observe when others are/aren’t following it, and to habituate themselves to a self-conception as someone who adheres to it.
For longtermism to get this advantage, it needs an everyday version. That shouldn’t just provide a fake/token activity, but meaningful practice that is substantively continuous with the type of longtermist decision-making which might have particularly large/important long-term impacts.
If longtermism got to millions or tens of millions of supporters—as seems plausible on timescales of a decade or three—it could be importantly bottlenecked on what kind of action-guiding advice to give people.
A third more speculative motivation is that the highest-leverage opportunities may be available only at the scale of individual decisions, so having better heuristics to help identify them might be important. The logic is outlined in the diagram below. Suppose opportunities naturally arise at many different levels of leverage (value out per unit of effort in) and scales (how much effort they can absorb before they’re saturated). In an ecosystem with lots of people seeking the best ways to help others, the large+good opportunities will all be identified and saturated. The best large opportunities left will be merely fine. For opportunities with small scale, however, there isn’t enough total value in exploiting them for the market to reliably have identified them. So there may be very high leverage opportunities left for individuals to pick up.
Of course this diagram is oversimplifying. It is an open question how efficient the altruistic market is even at large scales. It’s also an open question what the distribution of opportunities looks like to begin with. But even though it’s likely not this clean, the plausibility of the dynamic applying to some degree gives me reason to want decent guidance for longtermist action that can be applied on the everyday scale.
Interactions with patient/urgent longtermism
All of these reasons are stronger from a patient longtermist perspective than an urgent one. The more it’s the case that the crucial moments for determining the trajectory of the future will occur quite soon, the less value there is in finding a good everyday longtermist perspective (versus just trying to address the crucial problems directly). I’d guess that everyday longtermism has very limited leverage on anything in the next decade or two; is very important for critical junctures that are more than fifty or a hundred years away; and falls somewhere in the middle for intermediate timescales.
I think as a community we should have a portfolio which is spread across different timescales, and everyday longtermism seems like a really important question for the patient end of the spectrum.
Thoughts on some tentative answers
It seems likely to me that good blueprints will involve both some proxy goals and some other heuristics to follow (proxies have some advantages but also some disadvantages; I could elaborate on the thinking here but I don’t have a super crisp way of expressing it and I’m not sure anyone will be that interested).
Good candidates for proxy goals would ideally:
Be good (in expectation) for the long-term future;
Specify something that is broad enough that many decision-situations have some interaction with the proxy;
Be robust, such that slight perturbations of the goal or the world still leave something good to aim for;
Have some continuity with good goals in more strategic (less “everyday”) situations.
Cultivating good decision-making
The proxy goal that I (currently) feel best about is improving decision-making. I don’t know how to quantify the goal exactly, but it should put more weight on bigger improvements; on more important decision-makers; and on the idea of good decision-making itself being important to foster.
An elevator pitch for a blueprint for longtermism which naturally comes with an everyday component might be something like:
We help decision-makers care for the right things, and make good choices.
We care about the long term, but it’s mostly too far off to perceive clearly what to do. So our chief task is to set up the world of tomorrow for success by having people/organisations well placed to make good decisions (in the senses both of aiming for good things, and doing a good job of that).
We practice this at every scale, with attention to how much the case matters.
At the local level, everyone can contribute to this by nudging towards good decision-making wherever they see opportunities: at work; in their community; with family and friends. Collectively, we work to identify and then take particularly good opportunities for improving decision-making. This often means aiming at improving decision-making in particularly important domains, but can also mean looking for opportunities to make improvements at large scale. Currently, much of our work is in preventing global catastrophes; improving thinking about how to prioritise; and helping expose people to the ideas of longtermism.
For background on why I think cultivating good decision-making is robustly good for the long-term, see my posts on the “web of virtue thesis” and good altruistic decision-making as a basin in idea space.
Overall I feel reasonably good about this as a candidate blueprint for longtermism:
It’s basically a single coherent thing:
It connects up to philosophical motivations and down to action-guidance
The proxy of “good decision-making” is basically one I got to by thinking with the timeless lens on longtermism
It contextualises existing EA work:
Spreading the ideas of EA/longtermism is particularly important in the world today because these are crucial pieces missing from most decision-making
When we have good foresight over existential risks, these become key challenges of our time, and it becomes obviously good decision-making to work on them
There are opportunities to help improve decision-making on all sorts of different scales including the everyday, so there are lots of chances for people to get their hands dirty trying to help things
Here are some examples of everyday longtermism in pursuit of this proxy:
Alice votes for a political candidate whose character she has relative trust in (even though she disagrees with some of their policies), and encourages others to do the same.
This slowly contributes to the message “you need to be principled to be given power”, which both:
makes it more likely that important decision-makers are principled/trustworthy
reinforces social incentives to be of good character
Bob goes to work as a primary school teacher, and helps the children to perceive themselves as moral actors, as well as rewarding clear thinking.
Clara is a manager in the tech industry, and encourages a mindset on her team of “do things properly, and for the right reasons” rather than just chasing short-term results. She talks about this at conferences.
Diya works as a science journalist, and tries to give readers a clear picture of what we do and don’t understand about how the future might go.
Elmo hosts dinner parties, and exhibits curiosity about guests’ opinions, particularly on topics that touch on how the world may unfold over decades, warmly but robustly pushing back on parts that don’t make sense to him. He occasionally talks about his views on the importance of the long-term, and why it means it’s particularly valuable to encourage good thinking.
Abide by commonsense morality
While pursuing the proxy goal of improving decision-making, I would also recommend following general precepts of commonsense morality (e.g. don’t mislead people; try to be reliable; be considerate of the needs of others).
There are a few different reasons that I think commonsense morality is likely to be largely a good idea (expanded below):
It’s generated by a similar optimisation process as would ideally produce “commonsense longtermist morality”
It can serve as a safeguard to help avoid the perils of naive utilitarianism
It’s an expensive-to-fake signal of good intent, so can help make longtermism look good
I don’t think that commonsense morality will give on-the-nose the right recommendations. I am interested in the project of working out which pieces of commonsense morality can be gently put down, as well as which new ideas should be added to the corpus. However, I think it will get a lot of things basically right, and so it’s a good place to start, with deviations considered slowly and carefully.
Similar optimisation processes [speculative]
Commonsense morality seems like it’s been memetically selected to produce good outcomes for societies in which it’s embedded, when systematically implemented at local level.
We’d like a “commonsense longtermist morality”, something selected to produce good outcomes for larger future society, when systematically implemented at local level. Unfortunately finding that might be difficult. There’s a lot of texture to commonsense morality; we might expect that the ideal version of commonsense longtermist morality would have similar amounts of texture. And we can’t just run an evolutionary process to find a good version.
However, in an ideal world we might be able to run such an evolutionary process—letting people take lots of different actions, magically observing which turned out well or badly for the long-term, and then selectively boosting the types of behaviour that produced good results. That hypothetical process would be somewhat similar to the real process that has produced commonsense morality. The outputs of those processes would be optimised for slightly different things, but I expect there would be a significant degree of alignment, since both outcomes benefit from generally-good-&-fair decisions, and are hurt by selfishness, corruption, etc.
Safeguarding against naive utilitarianism
Roughly speaking, I think people sometimes think of the relationship between EA and traditional everyday morality as something like differing points on a spectrum:
I think that these two dimensions are actually more orthogonal than in opposition (the spectrum only appears when you think about a tug of war about what “good” is):
Utilitarianism (along with other optimising behaviours) gets kind of a bad reputation. I think this is significantly because naive application can cause large and real harms, where the mechanisms of harm are indirect and therefore easy to lose sight of. I think that commonsense morality can serve as a backstop which stops a lot of these bad effects.
We might think of commonsense morality as a memetic force exerting upwards pressure in this diagram. What we might call “strategic longtermism” is a memetic force exerting rightwards pressure (which seems much more undersupplied in society as a whole). But given the asymmetry between the top-left and bottom-right quadrants, we’d ideally like a memetic force which routes via the top-left. I think this provides some reason to bundle (something in the vicinity of) commonsense morality in as part of the memetic package of longtermism. And since (as I argued above) part of the purpose of having a good blueprint for everyday longtermism is to help its memetics by giving people something to practice, this should presumably be part of that blueprint. (And since commonsense morality is already something like a known quantity, it doesn’t cost a great deal of complexity to include it as part of the message.)
An expensive-to-fake signal of good intent
As longtermism grows as a cultural force, it seems likely that people encountering it will use a variety of different means to judge what they think of it. Some people will try to examine the merits of the philosophical arguments; some people will consider whether the actions it recommends seem to make sense; some people will question the motivations of the people involved in promoting it.
I think that a lot of parts of commonsense morality stand in the way of corruption. This means that abiding by commonsense morality is relatively cheap for actors who are not self-interested, and relatively expensive for actors who are primarily self-interested. So it’s a costly signal of good intent. I think this will help to make it seem legitimate and attractive to people.
To be clear, if I thought abiding by commonsense morality was otherwise a bad idea, this consideration would be quite unlikely to tip it over into being worthwhile. But I think it’s a good idea anyway, and this consideration provides reason to prioritise it even somewhat beyond what would be suggested by its own merits.
Conclusion
I’ve laid out a case for considering the question of everyday longtermism. I feel confident that this is an important question, and think it deserves quite a bit of attention from the community.
I’ve also given my current picture of what seems like a good blueprint for everyday longtermism. I’d be surprised if that’s going in totally the wrong direction, but I’d love to hear arguments that it is. On the other hand I think it’s quite possible that the high-level picture is slightly off, very likely that I’ve made some errors of judgement in the details, and nigh-certain that there’s a lot more detail that could productively be hashed out.