3 suggestions about jargon in EA

Summary and purpose

I suggest that effective altruists should:

  1. Be careful to avoid using jargon to convey something that isn’t what the jargon is actually meant to convey, and that could be conveyed well without any jargon.

    • As examples, I’ll discuss misuses I’ve seen of the terms existential risk and the unilateralist’s curse, and the jargon-free statements that could’ve been used instead.

  2. Provide explanations and/or hyperlinks to explanations the first time they use jargon.

  3. Be careful to avoid implying that jargon or concepts originated in EA when they did not.

I’m sure similar suggestions have been made before, both within and outside of EA. This post’s purpose is to collect the suggestions in one post that (a) can be linked to, and (b) has them as its sole focus (rather than touching on them only in passing).

This post is intended to provide friendly suggestions rather than criticisms. I’ve sometimes failed to follow these suggestions myself.

1. Avoid misuse

The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people. I’ve seen instances where EAs or EA-aligned people have used jargon to convey something other than what the jargon is meant to convey. This erodes the jargon’s upside while still incurring its downside of unfamiliarity. In these instances, it would be better to say what one is trying to say without jargon (or with different, more appropriate jargon).

Of course, “avoid misuse” is a hard principle to disagree with—but how do you implement it, in this case? I have two concrete suggestions (though I’m sure other suggestions could be made as well):

  • Before using jargon, think about whether you’ve actually read the source that introduced that jargon, and/or the most prominent source that used the jargon (i.e., the “go-to” reference). If you haven’t, perhaps read that before using the jargon. If you read that a long time ago, perhaps double-check it.

    • I suggest this in part because I suspect people often encounter jargon second-hand, leading to a “telephone game” effect.

  • See whether you can say the same idea without the jargon, at least in your own head. This may help you realise that you’re unsure what the jargon means. Or it may help you realise that the idea is easy to convey without the jargon.

I’ll now give two examples I’ve come across of the sort of misuse I’m talking about.

Existential risk

For details, see Clarifying existential risks and existential catastrophes.

What the term is meant to refer to: The most prominent definitions of existential risk are the following:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2012)

And:

An existential risk is a risk that threatens the destruction of humanity’s longterm potential (Ord, 2020)

Both authors make it clear that this refers to more than just extinction risk. For example, Ord breaks existential catastrophes down into three main types: extinction, unrecoverable collapse, and unrecoverable dystopia.

What the term is sometimes mistakenly used for: The term existential risk is sometimes used when the writer or speaker is actually referring only to extinction risk (e.g., in this post, this podcast, and this post). This is a problem because:

  • This makes the statements unnecessarily hard to understand for non-EAs.

  • We could suffer an existential catastrophe even if we do not suffer extinction, and it’s important to remain aware of this.

It would be better for these speakers and writers to just say “extinction risk”, as that term is more sharply defined, more widely understood, and a better fit for what they’re saying than is the term “existential risk” (see also Cotton-Barratt and Ord).

A separate problem is that the term existential risk is sometimes used when the writer or speaker is actually referring to global catastrophic risks. This invites confusion and concept creep, and should be avoided.

Unilateralist’s curse

What the term is meant to refer to: Bostrom, Douglas, and Sandberg write:

In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal.
[...] The unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse. The winner’s curse is the phenomenon that the winning bid in an auction has a high likelihood of being higher than the actual value of the good sold. Each bidder makes an independent estimate and the bidder with the highest estimate outbids the others. But if the average estimate is likely to be an accurate estimate of the value, then the winner overpays. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.

What the term is sometimes mistakenly used for: I’ve sometimes seen “unilateralist’s curse” used to refer to the idea that, as the number of people or small groups capable of causing great harm increases, the chances that at least one of them does so increases, and may become very high. This is because many people are careless, many people are well-intentioned but mistaken about what would be beneficial, and some people are malicious. For example, as biotechnology becomes “democratised”, we may face increasing risks from reckless curiosity-driven experimentation, reckless experimentation intended to benefit society, and deliberate terrorism. (See The Vulnerable World Hypothesis.)

That idea indeed involves the potential for large harms from unilateral action. But the unilateralist’s curse is more specific: it refers to a particular reason why mistakes in estimating the value of unilateral actions may lead to well-intentioned actors frequently causing harm. So the curse is relevant to harms from people who are well-intentioned but mistaken about what would be beneficial, but it is not clearly relevant to harms from people who are just careless or malicious.
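To make that mechanism concrete, here is a minimal simulation sketch (my own illustrative example with made-up parameters, not something from Bostrom, Douglas, and Sandberg’s paper). Each agent is entirely well-intentioned, but estimates the initiative’s value with independent noise and acts unilaterally if their own estimate happens to look positive:

```python
import random

def chance_initiative_happens(num_agents, true_value=-1.0, noise_sd=2.0, trials=10_000):
    """Roughly how often a (here, net-harmful) initiative gets undertaken when each
    well-intentioned agent acts on their own noisy estimate of its value.
    All numbers are made up purely for illustration."""
    undertaken = 0
    for _ in range(trials):
        # Each agent sees the true value plus independent noise, and acts
        # unilaterally if their own estimate looks positive.
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(num_agents)):
            undertaken += 1
    return undertaken / trials

for n in (1, 5, 20):
    print(n, chance_initiative_happens(n))
# The harmful initiative gets undertaken more and more often as the number of
# agents grows, even though every agent is well-intentioned and unbiased.
```

By contrast, the broader “more actors, more chance of harm” idea also covers careless or malicious actors, which this sketch doesn’t capture at all.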

2. Provide explanations and/or links

There is a lot of jargon used in EA. Some of it is widely known among EAs. Some of it isn’t. And I doubt any of it is universally known among EAs, especially when we consider relatively new EAs.

Additionally, in most cases, it would be good for our statements and writings to also be accessible to people who aren’t part of the EA community. This is because the vast majority of people—and even the vast majority of people actively trying to do good—aren’t part of the EA community (see Moss, 2020). (I say “in most cases” because of things like information hazards.)

Therefore, when first using a particular piece of jargon in a conversation, post, or whatever, it will often be valuable to provide a brief explanation of what it means, and/or a link to a good source on the topic. This helps people understand what you’re saying, introduces them to a (presumably) useful concept and perhaps body of work, and may make them feel more welcomed and less disorientated or excluded. It also doesn’t take long to do this, especially after the first time you choose a “go-to” link for that concept.

3. Avoid incorrectly implying that things originated in EA

It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms. For example, information hazards, the unilateralist’s curse, surprising and suspicious convergence, and the long reflection. But this is only a subset of the very useful concepts or terms used in EA. For example, the ideas of comparative advantage, counterfactual impact, and moral uncertainty each predate the EA movement.

It’s important to remember that many of the concepts used in EA originated outside of it, and to avoid implying that a concept originated in EA when it didn’t. Keeping this in mind can:

  • Help us find relevant bodies of work from outside EA

  • Help us avoid falling into arrogance or insularity, or forgetting to engage with the wealth of valuable knowledge and ideas generated outside of EA

  • Help us avoid coming across as arrogant, insular, or naive

    • For example, I was at an EA event also attended by an experienced EA, and by a newcomer with a background in economics. The experienced EA told the newcomer about a very common concept from economics as if it would be new to them, and said it was a “concept from EA”. The newcomer clearly found this strange and off-putting.

(That said, I do think that, even when concepts originated outside of EA, EA has been particularly good at collecting, further developing, and applying them, and that’s of course highly valuable work. My thanks to David Kristoffersson for highlighting that point in conversation.)

Closing remarks

I hope my marshalling of these common suggestions will be useful to some people. Feel free to make additional related suggestions in the comments, or to bring up your own pet-peeve misuses!