I have some sympathy to this perspective, and suspect you’re totally right about some parts of this.
"They misuse jargon like 'updating' and 'outside view' in an attempt to get their point across, and their interlocutors decide that talking with them is not worth their time."
However, I totally don’t buy this. IMO the concepts of “updating” and “outside view” are important enough and non-quantitative enough that if someone can’t use that jargon correctly after learning it, I’m very skeptical of their ability to contribute intellectually to EA. (Of course, we should explain what those terms mean the first time they run across them.)
For many non-native speakers, having a conversation in English is quite cognitively demanding – especially when talking about intellectual topics they have only just learned about. Even reasonably proficient speakers often struggle to express themselves as clearly as they could in their native language; there is a trade-off between fluent speech and optimal word choice and sentence construction. Given 2x more time, or the chance to write down their thoughts, they would possibly not misuse the jargon to the same degree.
Many people get excited about EA when they first hear about it and read a lot of material in a short time. At that speed of learning, retention of specific concepts is often not very good at first – but it gets a lot better after a few repetitions.
It’s possible that they would be better off learning and using the concepts more slowly but more accurately. Misuse of concepts might be some evidence that they are not the most promising candidates for intellectual contributions. But there seem to be other characteristics that could easily compensate for a sub-optimal-but-good rate of learning (e.g. open-mindedness, good judgment, persistence, creativity).
I think there is a disagreement that gets at the core of the issue.
"IMO the concepts of 'updating' and 'outside view' are important enough and non-quantitative enough that if someone can't use that jargon correctly after learning it, I'm very skeptical of their ability to contribute intellectually to EA."
The examples you mention are well chosen and get at the core of the issue, which is unnecessary in-group speak.
Updating: this basically means changing your opinions/worldview in proportion to new information.
It’s a neologism, and we’re bastardizing its formal use in Bayesian updating, where it is a term of art for computing a new probability distribution from a prior and new evidence.
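For reference, here is a minimal sketch of what the formal term involves, where H stands for a hypothesis and E for the evidence observed. Bayes' rule says the posterior probability of H after seeing E is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

So the formal sense presumes an explicit prior P(H) and likelihood P(E \mid H), while the colloquial EA sense just means shifting your credence in roughly the right direction by roughly the right amount.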
So imagine you’re in France, trying to vibe with some 200 IQ woman who has training in stats. Spouting off a few of these words in a row might annoy or confuse her. She might turn up her high-IQ Gallic nose and walk away.
If you’re talking to some 120 IQ dude in China who is really credulous, wants to get into EA, but doesn’t know these words and doesn’t have a background in stats, he might go home, look up what Bayesian updating means, conclude that EAs are literally calculating the posterior for their beliefs, and then wonder what prior they are using. The next day, that dude is going to look really “dumb”, because he spent 50x more effort than needed and will ask weird questions about how people are doing algebra in their heads.
Outside View: This is another neologism.
This time, it’s not even clear what the term means, which is a problem.
I’ve used it at various times in different situations to mean different things, and no one ever calls me out on this abuse. Maybe that’s because I speak fast, use big words, or know math stuff, or maybe I just use the word well, but getting away with it is a luxury not everyone has.
Once again, that somewhat smug misuse of language could really annoy or disadvantage a newcomer to EA, even a perfectly intelligent one.