3 suggestions about jargon in EA
Summary and purpose
I suggest that effective altruists should:
Be careful to avoid using jargon to convey something that isn’t what the jargon is actually meant to convey, and that could be conveyed well without any jargon.
As examples, I’ll discuss misuses I’ve seen of the terms existential risk and the unilateralist’s curse, and the jargon-free statements that could’ve been used instead.
Provide explanations and/or hyperlinks to explanations the first time they use jargon.
Be careful to avoid implying jargon or concepts originated in EA when they did not.
I’m sure similar suggestions have been made before, both within and outside of EA. This post’s purpose is to collect the suggestions together in one post that (a) can be linked to, and (b) has this as its sole focus (rather than touching on these suggestions in passing).
This post is intended to provide friendly suggestions rather than criticisms. I’ve sometimes failed to follow these suggestions myself.
1. Avoid misuse
The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people. I’ve seen instances where EAs or EA-aligned people have used jargon to convey something other than what the jargon is meant to convey. This erodes the jargon’s upside while still incurring its downside of unfamiliarity. In these instances, it would be better to say what one is trying to say without jargon (or with the different, appropriate jargon).
Of course, “avoid misuse” is a hard principle to disagree with—but how do you implement it, in this case? I have two concrete suggestions (though I’m sure other suggestions could be made as well):
Before using jargon, think about whether you’ve actually read the source that introduced that jargon, and/or the most prominent source that used the jargon (i.e., the “go-to” reference). If you haven’t, perhaps read that before using the jargon. If you read that a long time ago, perhaps double-check it.
I suggest this in part because I suspect people often encounter jargon second-hand, leading to a “telephone game” effect.
See whether you can say the same idea without the jargon, at least in your own head. This may help you realise that you’re unsure what the jargon means. Or it may help you realise that the idea is easy to convey without the jargon.
I’ll now give two examples I’ve come across of the sort of misuse I’m talking about.
Existential risk
For details, see Clarifying existential risks and existential catastrophes.
What the term is meant to refer to: The most prominent definitions of existential risk are the following:
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2012)
And:
An existential risk is a risk that threatens the destruction of humanity’s longterm potential (Ord, 2020)
Both authors make it clear that this refers to more than just extinction risk. For example, Ord breaks existential catastrophes down into three main types: extinction, unrecoverable collapse, and unrecoverable dystopia.
What the term is sometimes mistakenly used for: The term existential risk is sometimes used when the writer or speaker is actually referring only to extinction risk (e.g., in this post, this podcast, and this post). This is a problem because:
This makes the statements unnecessarily hard to understand for non-EAs.
We could suffer an existential catastrophe even if we do not suffer extinction, and it’s important to remain aware of this.
It would be better for these speakers and writers to just say “extinction risk”, as that term is more sharply defined, more widely understood, and a better fit for what they’re saying than is the term “existential risk” (see also Cotton-Barratt and Ord).
A separate problem is that the term existential risk is sometimes used when the writer or speaker is actually referring to global catastrophic risks. This invites confusion and concept creep, and should be avoided.
Unilateralist’s curse
What the term is meant to refer to: Bostrom, Douglas, and Sandberg write:
In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal.
[...] The unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse. The winner’s curse is the phenomenon that the winning bid in an auction has a high likelihood of being higher than the actual value of the good sold. Each bidder makes an independent estimate and the bidder with the highest estimate outbids the others. But if the average estimate is likely to be an accurate estimate of the value, then the winner overpays. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.
What the term is sometimes mistakenly used for: I’ve sometimes seen “unilateralist’s curse” used to refer to the idea that, as the number of people or small groups capable of causing great harm increases, the chances that at least one of them does so increases, and may become very high. This is because many people are careless, many people are well-intentioned but mistaken about what would be beneficial, and some people are malicious. For example, as biotechnology becomes “democratised”, we may face increasing risks from reckless curiosity-driven experimentation, reckless experimentation intended to benefit society, and deliberate terrorism. (See The Vulnerable World Hypothesis.)
That idea indeed involves the potential for large harms from unilateral action. But the unilateralist’s curse is more specific: it refers to a particular reason why mistakes in estimating the value of unilateral actions may lead to well-intentioned actors frequently causing harm. So the curse is relevant to harms from people who are well-intentioned but mistaken about what would be beneficial, but it is not clearly relevant to harms from people who are just careless or malicious.
2. Provide explanations and/or links
There is a lot of jargon used in EA. Some of it is widely known among EAs. Some of it isn’t. And I doubt any of it is universally known among EAs, especially when we consider relatively new EAs.
Additionally, in most cases, it would be good for our statements and writings to also be accessible to people who aren’t part of the EA community. This is because the vast majority of people—and even the vast majority of people actively trying to do good—aren’t part of the EA community (see Moss, 2020). (I say “in most cases” because of things like information hazards.)
Therefore, when first using a particular piece of jargon in a conversation, post, or whatever, it will often be valuable to provide a brief explanation of what it means, and/or a link to a good source on the topic. This helps people understand what you’re saying, introduces them to a (presumably) useful concept and perhaps body of work, and may make them feel more welcomed and less disorientated or excluded. It also doesn’t take long to do this, especially after the first time you choose a “go-to” link for that concept.
3. Avoid incorrectly implying that things originated in EA
It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms. For example, information hazards, the unilateralist’s curse, surprising and suspicious convergence, and the long reflection. But this is only a subset of the very useful concepts or terms used in EA. For example, the ideas of comparative advantage, counterfactual impact, and moral uncertainty each predate the EA movement.
It’s important to remember that many of the concepts used in EA originated outside of it, and to avoid implying that a concept originated in EA when it didn’t. Keeping this in mind can:
Help us find relevant bodies of work from outside EA
Help us avoid falling into arrogance or insularity, or forgetting to engage with the wealth of valuable knowledge and ideas generated outside of EA
Help us avoid coming across as arrogant, insular, or naive
For example, I was at an EA event also attended by an experienced EA, and by a newcomer with a background in economics. The experienced EA told the newcomer about a very common concept from economics as if it would be new to them, and said it was a “concept from EA”. The newcomer clearly found this strange and off-putting.
(That said, I do think that, even when concepts originated outside of EA, EA has been particularly good at collecting, further developing, and applying them, and that’s of course highly valuable work. My thanks to David Kristoffersson for highlighting that point in conversation.)
Closing remarks
I hope my marshalling of these common suggestions will be useful to some people. Feel free to make additional related suggestions in the comments, or to bring up your own pet-peeve misuses!
I agree with these recommendations, thanks for providing a resource one can conveniently link to. (I also thought I remembered a very similar post from a couple of years ago, but wasn’t able to find it. So maybe I made that up.)
I still remember an amusing instance of what you address in #3: A few years ago, an EA colleague implied that the metaphor “carving reality at its joints” was rationalist/LessWrong terminology. But in fact it’s commonly thought to have been coined by Plato, and it’s frequently used in academic philosophy.
Good example! And it makes me realise that perhaps I should’ve indicated that the scope of this post should be jargon in EA and the rationality community, as I think similar suggestions would be useful there too.
A post that feels somewhat relevant, though it’s not about jargon, is Less Wrong Rationality and Mainstream Philosophy. One quote from that: “Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century.”
(None of this is to deny that the EA and rationality communities are doing a lot of things excellently, and “punching well above their weight” in terms of insights had, concepts generated/collected/refined, good done, etc. It’s merely to deny a particularly extreme view on EA/rationality’s originality, exceptionalism, etc.)
I also wrote about a third example of misuse of jargon I’ve seen, but then decided it wasn’t really necessary to include a third example. Here’s the example anyway, for anyone interested:
Cluelessness
What the term is meant to refer to: “Cluelessness” is a technical term within philosophy. It seems to have been used to refer to multiple different concepts (e.g., “simple” vs “complex” cluelessness). These concepts are complicated, and I tentatively believe that they’re not useful, so I won’t try to explain them here. If you’re interested in the concept, see Greaves, Greaves & Wiblin, or Mogensen.
What the term is sometimes mistakenly used for: I’ve seen some effective altruists use the term “cluelessness” to refer simply to the idea that it’s extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive. This idea seems to me clearly true and important. But it can also be easily and concisely expressed without jargon.
And I’m almost certain that philosophers writing about “cluelessness” very specifically want the term to mean something more specific than just the above idea. This is because they want to talk about situations in which expected value reasoning might be impossible or ill-advised. (I tentatively disagree with the distinction and implications they’re drawing, but it seems useful to recognise that they wish to draw that distinction and those implications.)
I actually think this is a tricky case where the boundary to misuse is hard to discern. (I do agree that, in many contexts, the idea can and should “be easily and concisely expressed without jargon”.)
This is because I think philosophical work on cluelessness is at its core motivated by it being “extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive”. To be fair, it’s motivated by a bit more, but arguably not much more: that bit is roughly that some philosophers think that the “hard to predict” observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty such as expected value. But importantly, there is no consensus about what the correct “theoretical account” of cluelessness is: to name just a few, it might be non-sharp credences, it might be low credal resilience, or it might just be a case where expected-value reasoning is hard but we can’t say anything interesting about it after all. Still, cluelessness is a term proponents of all these different views use.
I think this is a quite common situation in philosophy: at least some people have a ‘pre-theoretic intuition’ that seems to point to some philosophically interesting concept, but philosophers can’t agree on what it is, what its properties are, or even whether the intuition refers to anything at all. Analogs might be:
‘Free will’. Philosophers can’t agree if it’s about being responsive to reasons, having a mesh of desires with certain properties, being ultimately responsible for one’s actions, a “could have done otherwise” ability, or something else; whether it’s compatible with determinism; whether it’s simply an illusion. But it would be odd to say that “using free will to simply refer to the idea that certain (non-epistemic) conditions need to be fulfilled for someone to be morally responsible for their actions—in a way in which e.g. an addict isn’t” was a “misuse” of the term because free will is a technical term in philosophy by which people mean something more specific.
‘Truth’. Philosophers can’t agree if it’s about correspondence to reality, coherence, or something else; whether it’s foundational for meaning or the other way around; or if it’s just a word whose semantics is fully captured by sentences like “‘snow is white’ is true if and only if snow is white” and about which we can’t have any interesting theory. But it would be odd to say that garden-variety uses of “true” are misuses.
‘Consciousness’: …
And so on.
Thanks for this comment. I found it interesting both as pushback on my point, and as a quick overview of parts of philosophy!
Some thoughts in response:
Firstly, I wrote “I also wrote about a third example of misuse of jargon I’ve seen, but then decided it wasn’t really necessary to include a third example”. Really, I should’ve written ”...but then decided it wasn’t really necessary to include a third example, and that that example was one I was less sure of and where I care less about defending the jargon in question.” Part of why I was less sure is that I’ve only read two papers on the topic (Greaves’ and Mogensen’s), and that was a few months ago. So I have limited knowledge on how other philosophers use the term.
That said, I think the term’s main entry point into EA is Greaves’ and Mogensen’s papers and Greaves on the 80k podcast (though I expect many EAs heard it second-hand rather than from these sources directly). And it seems to me that at least those two philosophers want the term to mean something more specific than “it’s extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive”, because otherwise the term wouldn’t include the idea that we can’t just use expected value reasoning. Does that sound right to you?
More generally, I got the impression that cluelessness, as used by academics, refers to at least that idea of extreme difficulty in predictions, but usually more as well. Does that sound right?
This might be analogous to how existential risk includes extinction risk, but also more. In such cases, if one is actually just talking about the easy-to-express-in-normal-language component of the technical concept rather than the entire technical concept, it seems best to use normal language rather than the jargon.
Then there’s the issue that “cluelessness” is also just a common term in everyday English, like free will and truth, and unlike existential risk or the unilateralist’s curse. That does indeed muddy the matter somewhat, and reduce the extent to which “misuse” would confuse or erode the jargon’s meaning.
One thing I’d say there is that, somewhat coincidentally, I’d found the phrase “I’ve got no clue” a bit annoying for years before getting into EA, in line with my general aversion to absolute statements and black-and-white thinking. Relatedly, I think that, even ignoring the philosophical concept, “cluelessness about the future”, taken literally, implies something extreme and similar to what I think the philosophical concept is meant to imply. That seems like a very small extra reason to avoid the term when a speaker doesn’t really mean to imply we can’t know anything about the consequences of our actions. But that’s probably a fairly subjective stance, which someone could reasonably disagree with.
It sounds like we’re at least roughly on the same page. I certainly agree that e.g. Greaves and Mogensen don’t seem to think that “the long-term effect of our actions is hard to predict, but this is just a more pronounced version of it being harder to predict the weather in one day than in one hour, and we can’t say anything interesting about this”.
As I said:
I would still guess that to the extent that these two (and other) philosophers advance more specific accounts of cluelessness—say, non-sharp credence functions—they don’t take their specific proposal to be part of the definition of cluelessness, or to be a criterion for whether the term ‘cluelessness’ refers to anything at all. E.g. suppose philosopher A thought that cluelessness involves non-sharp credence functions, but then philosopher B convinces them that the same epistemic state is better described by having sharp credence functions with low resilience (i.e. likely to change a lot in response to new evidence). I’d guess that philosopher A would say “you’ve convinced me that cluelessness is just low credal resilience rather than non-sharp credence functions” as opposed to “you’ve convinced me that I should discard the concept of cluelessness—there is no cluelessness, just low credal resilience”.
(To be clear, I think in principle either of these uses of cluelessness would be possible. I’m also less confident that my perception of the common use is correct than I am for terms that are older and have a larger literature attached to them, such as the examples I gave in my previous comment.)
Hmm, looking again at Greaves’ paper, it seems like it really is the case that the concept of “cluelessness” itself, in the philosophical literature, is meant to be something quite absolute. From Greaves’ introduction:
So at least in her account of how other philosophers have used the term, it refers to not having “even the faintest idea” which act is better. This also fits with what “cluelessness” arguably should literally mean (having no clue at all). This seems to me (and I think to Greaves?) quite distinct from the idea that it’s very very very* hard to predict which act is better, and thus even whether an act is net positive.
And then Greaves later calls this “simple cluelessness”, and introduces the idea of “complex cluelessness” for something even more specific and distinct from the basic idea of things being very very very hard to predict.
Having not read many other papers on cluelessness, I can’t independently verify that Greaves is explaining their usage of “cluelessness” well. But from that, it does seem to me that “cluelessness” is intended to refer to something more specific (and, in my view, less well-founded) than what I’ve seen some EAs use it to refer to (the very true and important idea that many actions are very very very hard to predict the value of).
(Though I haven’t re-read the rest of the paper for several months, so perhaps “never have even the faintest idea” doesn’t mean what I’m taking it to mean, or there’s some other complexity that counters my points.)
*I’m now stepping away from saying “extremely hard to predict”, because one might argue that that should, taken literally, mean “as hard to predict as anything could ever be”, which might be the same as “so hard to predict that we can’t have even the faintest idea”.
Thanks for this post!
Jargon has another important upside: its use is a marker of in-group belonging. So, especially IRL, employing jargon might be psychologically or socially useful for people who are not immediately perceived as belonging in EA, or feel uncertain whether they are being perceived as belonging or not.
Because jargon is a marker of in-group belonging, I fear that giving an unprompted explanation could be alienating to someone who infers that jargon is being explained to them because they’re perceived as not belonging. (E.g., “I know what existential risk is! Would this person feel the need to explain this to me if I were white/male/younger?”) In some circumstances, explaining jargon unprompted will be appreciated and inclusionary, but I think it’s a judgment call.
Yes, I think these are all valid points. So my suggestion would indeed be to often provide a brief explanation and/or a link, rather than to always do that. I do think I’ve sometimes seen people explain jargon unnecessarily in a way that’s a bit awkward and presumptuous, and perhaps sometimes been that person myself.
In my articles for the EA Forum, I often include just links rather than explanations, as that gives readers the choice to get an explanation if they wish. And in person, I guess I’d say that it’s worth:
entertaining both the hypothesis that using jargon without explanation would make someone feel confused/excluded, and the hypothesis that explaining jargon would make the person feel they’re perceived as more of a “newcomer” than they really are
then trying to do whatever seems best based on the various clues and cues
with the options available including more than just “assume they know the jargon” and “assume they don’t and therefore do a full minute spiel on it”; there are also options like giving a very brief explanation that feels natural, or asking if they’ve come across that term
One last thing I’d say is that I think the fact jargon is used as a marker of belonging is also another reason to sometimes use jargon-free statements or explain the jargon, to avoid making people who don’t know the jargon feel excluded. (I guess I intended that point to be implicit in saying that explanations and/or hyperlinks of jargon “may make [people] feel more welcomed and less disorientated or excluded”.)
Did “information hazard” originate in EA? Plenty of results on Google for “dangerous information” and “dangerous knowledge”, which I think mean almost the same thing, although I suppose “information hazard” refers to the risk itself, while “dangerous information” and “dangerous knowledge” refer to the information/knowledge and might suggest likely harm rather than just risk.
One aspect of how “information hazard” tends to be conceptualised that is fairly new[1], apart from the term itself, is the idea that one might wish to be secretive out of impartial concern for humankind, rather than for selfish or tribal reasons[2].
This especially applies in academia, where the culture and mythology are strongly pro-openness. Academics are frequently secretive, but typically in a selfish way that is seen as going against their shared ideals[3]. The idea that a researcher might be altruistically secretive about some aspect of the truth of nature is pretty foreign, and to me is a big part of what makes the “infohazard” concept distinctive.
Not 100% unprecedentedly new, or anything, but rare in modern Western discourse pre-Bostrom.
I think a lot of people would view those selfish/tribal reasons as reasonable/defensible, but still different from e.g. worrying that such-and-such scientific discovery might damage humanity-at-large’s future.
Brian Nosek talks about this a lot – academics mostly want to be more open but view being so as against their own best interests.
Is discourse around lying/concealing information out of altruistic concern really that rare in Western cultures?
I feel like lying about the extent of pandemics for “your own good” is a tragic pattern that’s frequently repeated in history, and that altruistic motivations (or at least justifications) are commonly presented for why governments do this.
“Think of the children” and moral panic justifications for censorship seem extremely popular.
Academia, especially in the social sciences and humanities, also strikes me as being extremely pro-concealment (either actively or, more commonly, passively, by believing we should not gather information in the first place) on topics it actually views as objectionable for explicitly altruistic reasons.
Another example might be public health messaging. E.g. I’ve heard anecdotal claims that it’s a deliberate choice not to emphasize, say, the absolute risk of contracting HIV per instance of unprotected sex with an infected person.
Good question/point! I definitely didn’t mean to imply that EAs were the first people to recognise the idea that true information can sometimes cause harm. If my post did seem to imply that, that’s perhaps a good case study in how easy it is to fall short of my third suggestion, and thus why it’s good to make a conscious effort on that front!
But I’m pretty sure the term “information hazard” was publicly introduced in Bostrom’s 2011 paper. And my sentence preceding that example was “It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms”.
I said “or terms” partly because it’s hard to say when something is a new concept vs an extension or reformulation of an old one (and the difference may not really matter). I also said that partly because I think new terms (jargon) can be quite valuable even if they merely serve as a shorthand for one specific subset of all the things people sometimes mean by another, more everyday term. E.g., “dangerous information” and “dangerous knowledge” might sometimes mean (or be taken to mean) “information/knowledge which has a high chance of being net harmful”, whereas “information hazard” just conveys at least a non-trivial chance of at least some harm.
As for whether it was a new concept: the paper provided a detailed treatment of the topic of information hazards, including a taxonomy of different types. I think one could argue that this amounted to introducing the new concept of “information hazards”, which was similar to and built on earlier concepts such as “dangerous information”. (But one could also argue against that, and it might not matter much whether we decide to call it a new concept vs an extension/new version of existing ones.)
All good points!
Some quick self-review thoughts:
I still stand by these points and by the implicit claims that they’re worth stating and that they’re often not adhered to.
On probably >10% of docs I give feedback on, I finish at least one comment with “See also https://forum.effectivealtruism.org/posts/uGt5HfRTYi9xwF6i8/3-suggestions-about-jargon-in-ea ”
I think these points are pretty obvious and probably were already in many people’s heads. I think probably many people could’ve easily written approximately this. If I recall correctly, I wrote it in ~2 hours, just after the thought initially struck me.
So I think this is one of a fair few cases where I’ve added value by plucking a low-hanging fruit that it seems surprising no one else had plucked.
Most of the other examples are my collections, e.g. https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates and the pingbacks from Suggestion: EAs should post more summaries and collections. (I think some examples other than this post are better examples since they probably added more value.)
It seems to me that it should be possible for much more of this sort of thing to be spotted and done by non-me people than currently happens. I’m not sure how best to make that happen. One approach I’ve attempted since March 2020 is just saying this sort of thing a lot and hoping.
(Hopefully obvious caveats: Of course there are already non-me people doing this, but I think there could be more of this. And of course there are also many useful things to do other than just these sorts of quick low-hanging fruit posts.)
The first two points aren’t at all specific to EA, so it’s plausible that those points were already said concisely in an article from outside the EA community and that I could’ve just linkposted that and added the third point. But I didn’t know of such an article and no one has pointed me to one yet, and this was quick enough to write that I still think it made sense to just write it rather than hunt around for existing work first.
I’d suggest:
Continuing to build the EA Forum wiki as the canonical source of definitions of these terms, with links/citations. Link to this whenever possible.
Adding explicit “disambiguations” to the definitions there, starting with pasting in the ones you mention here.