3 suggestions about jargon in EA

Summary and purpose

I suggest that effective altruists should:

  1. Be careful to avoid using jargon to convey something other than what the jargon is actually meant to convey, especially when the idea could be conveyed well without any jargon.

    • As examples, I’ll discuss misuses I’ve seen of the terms existential risk and the unilateralist’s curse, and the jargon-free statements that could’ve been used instead.

  2. Provide explanations and/or hyperlinks to explanations the first time they use jargon.

  3. Be careful to avoid implying jargon or concepts originated in EA when they did not.

I’m sure similar suggestions have been made before, both within and outside of EA. This post’s purpose is simply to collect them in one place that (a) can be linked to, and (b) has these suggestions as its sole focus (rather than touching on them in passing).

This post is intended to provide friendly suggestions rather than criticisms. I’ve sometimes failed to follow these suggestions myself.

1. Avoid misuse

The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people. I’ve seen instances where EAs or EA-aligned people have used jargon to convey something other than what the jargon is meant to convey. This erodes the jargon’s upside while still incurring the downside of unfamiliarity. In these instances, it would be better to say what one is trying to say without jargon (or with different, more appropriate jargon).

Of course, “avoid misuse” is a hard principle to disagree with—but how do you implement it, in this case? I have two concrete suggestions (though I’m sure other suggestions could be made as well):

  • Before using jargon, think about whether you’ve actually read the source that introduced it, and/or the most prominent source that uses it (i.e., the “go-to” reference). If you haven’t, perhaps read that before using the jargon. If you read it a long time ago, perhaps revisit it to double-check.

  • See whether you can express the same idea without the jargon, at least in your own head. This may reveal that you’re unsure what the jargon means, or that the idea is easy to convey without it.

I’ll now give two examples I’ve come across of the sort of misuse I’m talking about.

Existential risk

For details, see Clarifying existential risks and existential catastrophes.

What the term is meant to refer to: The most prominent definitions of existential risk are the following:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2012)

And:

An existential risk is a risk that threatens the destruction of humanity’s longterm potential (Ord, 2020)

Both authors make it clear that this refers to more than just extinction risk. For example, Ord breaks existential catastrophes down into three main types: extinction, unrecoverable collapse, and unrecoverable dystopia.

What the term is sometimes mistakenly used for: The term existential risk is sometimes used when the writer or speaker is actually referring only to extinction risk (e.g., in this post, this podcast, and this post). This is a problem because:

  • This makes the statements unnecessarily hard to understand for non-EAs.

  • We could suffer an existential catastrophe even if we do not suffer extinction, and it’s important to remain aware of this.

It would be better for these speakers and writers to just say “extinction risk”, as that term is more sharply defined, more widely understood, and a better fit for what they’re saying than “existential risk” is (see also Cotton-Barratt and Ord).

A separate problem is that the term existential risk is sometimes used when the writer or speaker is actually referring to global catastrophic risks: roughly, risks of serious harm on a global scale, which need not permanently destroy humanity’s potential. Using the two terms interchangeably invites confusion and concept creep, and should be avoided.

Unilateralist’s curse

What the term is meant to refer to: Bostrom, Douglas, and Sandberg write:

In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal.
[...] The unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse. The winner’s curse is the phenomenon that the winning bid in an auction has a high likelihood of being higher than the actual value of the good sold. Each bidder makes an independent estimate and the bidder with the highest estimate outbids the others. But if the average estimate is likely to be an accurate estimate of the value, then the winner overpays. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.

What the term is sometimes mistakenly used for: I’ve sometimes seen “unilateralist’s curse” used to refer to the idea that, as the number of people or small groups capable of causing great harm increases, the chance that at least one of them does so increases, and may become very high. This is because many people are careless, many people are well-intentioned but mistaken about what would be beneficial, and some people are malicious. For example, as biotechnology becomes “democratised”, we may face increasing risks from reckless curiosity-driven experimentation, reckless experimentation intended to benefit society, and deliberate terrorism. (See The Vulnerable World Hypothesis.)

That idea indeed involves the potential for large harms from unilateral action. But the unilateralist’s curse is more specific: it refers to a particular reason why mistakes in estimating the value of unilateral actions may lead to well-intentioned actors frequently causing harm. So the curse is relevant to harms from people who are well-intentioned but mistaken about what would be beneficial, but it is not clearly relevant to harms from people who are just careless or malicious.
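To make the statistical mechanism behind the curse concrete, here is a minimal simulation sketch. It is my own illustration rather than a model from Bostrom, Douglas, and Sandberg’s paper, and the function name and parameter values are arbitrary assumptions. Each of several well-intentioned agents observes a noisy estimate of an initiative’s true value and acts unilaterally if their estimate is positive; as the number of agents grows, the chance that a net-harmful initiative gets undertaken rises, even though every agent acts on an honest best guess.

```python
import random

def chance_initiative_undertaken(true_value, n_agents, noise_sd, trials=50_000):
    """Estimate the probability that at least one agent acts unilaterally.

    Each agent independently observes true_value plus Gaussian noise and acts
    if their estimate is positive. (Illustrative assumptions, not the paper's model.)
    """
    undertaken = 0
    for _ in range(trials):
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_agents)):
            undertaken += 1
    return undertaken / trials

# A mildly harmful initiative (true value -1) judged with noisy estimates (sd 1):
for n in (1, 5, 20):
    print(f"{n:>2} agents: P(undertaken) ~ {chance_initiative_undertaken(-1, n, 1):.2f}")
```

Under these assumptions, the probability is roughly 1 − (1 − p)^N, where p is a single agent’s chance of overestimating past the action threshold and N is the number of agents; the simulation just makes that growth visible, mirroring the winner’s-curse logic quoted above.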

2. Provide explanations and/or links

There is a lot of jargon used in EA. Some of it is widely known among EAs. Some of it isn’t. And I doubt any of it is universally known among EAs, especially when we consider relatively new EAs.

Additionally, in most cases, it would be good for our statements and writings to also be accessible to people who aren’t part of the EA community. This is because the vast majority of people—and even the vast majority of people actively trying to do good—aren’t part of the EA community (see Moss, 2020). (I say “in most cases” because of things like information hazards.)

Therefore, when first using a particular piece of jargon in a conversation, post, or other communication, it will often be valuable to provide a brief explanation of what it means, and/or a link to a good source on the topic. This helps people understand what you’re saying, introduces them to a (presumably) useful concept and perhaps a body of work, and may make them feel more welcome and less disoriented or excluded. It also doesn’t take long to do this, especially after the first time you choose a “go-to” link for that concept.

3. Avoid incorrectly implying that things originated in EA

It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms. For example, information hazards, the unilateralist’s curse, surprising and suspicious convergence, and the long reflection. But this is only a subset of the very useful concepts or terms used in EA. For example, the ideas of comparative advantage, counterfactual impact, and moral uncertainty each predate the EA movement.

It’s important to remember that many of the concepts used in EA originated outside of it, and to avoid implying that a concept originated in EA when it didn’t, because doing so can:

  • Help us find relevant bodies of work from outside EA

  • Help us avoid falling into arrogance or insularity, or forgetting to engage with the wealth of valuable knowledge and ideas generated outside of EA

  • Help us avoid coming across as arrogant, insular, or naive

    • For example, I was at an EA event also attended by an experienced EA, and by a newcomer with a background in economics. The experienced EA told the newcomer about a very common concept from economics as if it would be new to them, and said it was a “concept from EA”. The newcomer clearly found this strange and off-putting.

(That said, I do think that, even when concepts originated outside of EA, EA has been particularly good at collecting, further developing, and applying them, and that’s of course highly valuable work. My thanks to David Kristoffersson for highlighting that point in conversation.)

Closing remarks

I hope my marshalling of these common suggestions will be useful to some people. Feel free to make additional related suggestions in the comments, or to bring up your own pet-peeve misuses!