Should EA be explicitly long-termist or uncommitted?

One source of tension within EA is the divide between those who celebrate the increasing focus of highly-engaged EAs and major orgs on long-termism and those who embrace the older, short-termist focus on high standards of evidence. While I tend to fall into the former group, I’m also concerned by some of the comments I’ve heard from people in the latter group wondering how much of a place there is for them in EA.

As with many questions, there’s no reason to limit our answer to a binary “yes” or “no”. There are different degrees to which, and different ways in which, we could be explicitly long-termist, and the main ones need to be considered separately.

The simplest question to address is whether organisations that have come to embrace long-termism in both their thought and actions should explicitly adopt that label[1]. I think they have to: both because trust is essential for cooperation and because concealing where they stand would be a transparent trick that would never work over the long term. If any dissatisfied short-termists focus their frustration on such explicit declarations, I would see this as both misguided and counterproductive, since it’s important for these organisations to be open about where they stand.

The next question is how much the increasing support for long-termism among highly-engaged EAs and orgs should affect the distribution of resources[2]. Perhaps there is a less contentious way of framing this, but I think it’s important to face the issue openly rather than discuss it in the evasive terms a politician might use. Again, I think the answer here is very simple: of course it should affect the distribution. I’m not suggesting that the distribution of resources should be a mere popularity contest, but insofar as we respect the opinions of people within this group, it ought to cause some kind of Bayesian update[3].

I guess the last question I’ll consider is to what degree it ought to affect the distribution of resources[4]. Firstly, there are the moral uncertainty arguments, which Will MacAskill has covered sufficiently that there’s no need for me to go over them here.

Secondly, many of the short-termist projects that EA has pursued have been highly effective, and I would see it as a great loss if such projects suddenly had the rug pulled out from underneath them. Large, sudden shifts have all kinds of negative consequences, from demoralising staff, to wasting previous investments in staff and infrastructure, to potentially bankrupting organisations that would otherwise have been sustainable.

Stepping beyond the direct consequences, I would also be worried about what this would mean for the alliance between people favouring different cause areas, which I believe has benefited each cause area up until now. Many EA groups are rather small. There appears to be a minimum critical mass for a group to be viable, and if too many short-termists were to feel unsupported[5], many groups might not be able to reach it. This is especially concerning given how many long-termists (myself included) passed through a period of short-termism first.

There are also important economies of scale, such as having a highly-skilled movement builder promoting EA in general, rather than a specific cause area. The same goes for having some kind of national infrastructure for donation routing and for organising a local EAGx. Long-termist organisations also benefit from being able to hire ops people who are value-aligned, but not explicitly long-termist.

Beyond this, I think there are significant benefits to EA pursuing projects which provide more concrete feedback and practical lessons than long-termist projects often do. I see these projects as important for the epistemic health of the movement as a whole.

Perhaps it feels like I’m focusing too much upon the long-termist perspective, but my goal in the previous paragraphs was to demonstrate that even from a purely long-termist perspective too much of a shift towards long-termism would be counterproductive[6].

Nonetheless, the increasing prominence of long-termism suggests that EA may need to rebalance its relationship with short-termist projects in a way that respects both that prominence and the valuable contributions of short-termists.

You may be wondering whether such a thing is even possible. I think it is, although it would involve shifting some of the resources dedicated towards short-termism[7] from supporting short-termist projects to directly supporting short-termists[8]. If the amount of resources available is reduced, it’s natural to adopt a strategy that can be effective with smaller amounts of money[9].

And the strategy that seems most sensible to me would be to increase the focus on incubation and seed funding, with less of a focus on providing long-term funding, though a certain level of long-term funding would still be provided by dedicated short-termist EAs[10]. I would also be keen on increased support for organisations such as Giving What We Can, Raising for Effective Giving and Founders Pledge: insofar as they can attract funding from outside the EA community to short-termist projects, they can make up some of the shortfall from shifting the focus of the EA community more towards long-termism.

Beyond this, insofar as existing organisations lean heavily long-termist, it makes sense to create new organisations to serve EAs more generally. The example that immediately springs to mind is how, in addition to 80,000 Hours, there are now groups such as Probably Good and Animal Advocacy Careers.

Alternatively, some of these problems could be resolved by long-termists building their own infrastructure. For example, the existence of the Alignment Forum means that the EA Forum isn’t overrun with AI Safety discussion. Similarly, if EAG is being overrun with AI Safety people, it might almost make sense to run two simultaneous conferences right next to each other, so that people interested in global poverty can network with each other and walk over to the next hall if they want to mix with people interested in AI Safety.

So to summarise:

  • Organisations should be open about where they stand in relation to long-termism.

  • The distribution of resources should reflect the growing prominence of long-termism to some degree, but it would be a mistake to undervalue short-termism, especially if this led the current alliance to break down[11].

  • There should be less of a focus on providing long-term funding for direct short-termist work and more focus on incubation, seed funding and providing support for short-termists within the EA community. This would increase the amount of resources available to long-termist projects, whilst also keeping the current alliance system strong.

  • Long-termists should also develop their own institutions as a way of reducing contention over resources.

  1. This isn’t a binary. An organisation may say, “Our strategy is mostly focused upon long-termist aims, but we make sure to dedicate a certain amount of resources towards promising short-termist projects as well”.

  2. I’m using “resources” in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given that the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute amount of resources dedicated towards short-termist projects.

  3. The focus of this article is not so much on arguing in favour of long-termism—other people have covered this sufficiently—as on thinking through its strategic consequences.

  4. One point I don’t address in the main body of the text is how much of a conflict there actually is between investing resources in long-termism and short-termism. This is especially the case in terms of financial resources, given how well-funded EA is these days. I guess I’m skeptical of the notion that we wouldn’t be able to find net-positive things to do with the available funding, but even if there isn’t really a conflict now, I suspect there will be in the near future as AI Safety projects scale up.

  5. I’m not entirely happy with this wording, as the goal isn’t merely to make short-termists feel supported, but to actually provide useful support that allows them to have the greatest impact possible.

  6. I’ve intentionally avoided going too much into the PR benefits of short-termism for EA because of the potential for PR concerns to distort the epistemology of the community, as well as to corrode internal trust. Beyond this, it could easily be counterproductive, because people can often tell if you’re just doing something for PR, and it would be extremely patronising towards short-termists. For these reasons I’ve focused on the model of EA as a mutually beneficial alliance between people who hold different views.

  7. Here I’m primarily referring to resources donated to short-termism by funders who aren’t necessarily short-termist themselves, but who dedicate some amount of funding due to factors such as moral uncertainty. These funders are most likely to be sympathetic to the proposal I’m making here.

  8. I accept there are valid concerns about the tendency of organisations to focus on the interests of in-group members at the expense of the stated mission. On the other hand, I see it as equally dangerous to swing too far in the other direction, where good people leave or become demoralised because of insufficient support. We need to chart a middle course.

  9. Large-scale poverty reduction projects cost tens of millions of dollars, so a small percentage reduction in the amount of money dedicated towards short-termist projects would enable a significant increase in the amount of money for supporting short-termists.

  10. Additionally, I’m not claiming that persuadable EAs should move completely to this model; just somewhere along this axis.

  11. Stronger: in any kind of relationship, you really don’t want to be anywhere near the minimal level of trust or support needed to keep things together.