The main disadvantage of rationalist discourse is that it can do pretty badly when talking to non-rationalists.
I think posts on LessWrong often do badly on that metric, because they aren’t trying to appeal to (e.g.) the voting public. Which seems to me like the right call on LW’s part.
I think the norms themselves will actually help you do better at communicating with people outside your bubble, because a lot of them are common-sense (but not commonly applied!) ideas for bridging the gap between yourself and people who don’t have the same background as you.
Be internally clear about what your primary conversational goals are, and focus on the things that you expect to help most with those goals, rather than getting side-tracked every time a shiny tangent catches your eye.
Treat people you’re talking to with respect, err on the side of assuming good faith, and try to understand their perspective.
Try to use simple, concrete language.
Make your goals and preferences explicit in relatively chill descriptive language, rather than relying on outrage, shaming, etc. to try to pressure people into agreeing with you.
Flag when there are ways to test a claim, and consider testing it if it’s easy. (E.g., “oh wait, let’s google that.”)
Etc.
I.e., these norms play well with multicultural, diverse groups of people.
I do think they’re better for cooperative dynamics than for heavily adversarial ones: if you’re trying to understand the perspective of someone else, have them better understand your perspective, treat the other person like a peer, respect their autonomy and agency, learn more about the world in collaboration with them, etc., then I think these norms are spot-on.
If you’re trying to manipulate them or sneak ideas past their defenses, then I don’t think these norms are ideal (though I think that’s generally a bad thing to do, and I think EA will do a lot better and be a healthier environment if it moves heavily away from that approach to discourse).
If you’re interacting with someone else who’s acting adversarially toward you, then I think these norms aren’t bad but they have their emphasis in the wrong place. Like, “Goodwill” leaves room for noticing bad actors and responding accordingly (see footnote 5), but if I were specifically giving people advice for dealing with bad actors, I don’t think any version of “Goodwill” would go on my top ten list of tips and tricks to employ.
Instead, these norms are aimed at a target more like “a healthy intellectual community that’s trying to collaboratively figure out what’s true (that can also respond well when bad actors show up in its spaces, but that’s more like a top-ten desideratum rather than being the #1 desideratum)”.
“Trick an audience of laypeople into believing your views faster than a creationist can trick that audience into believing their views” is definitely not what these discourse norms are optimized for helping with, and I think that’s to their credit. Basically zero EAs should be focusing on a goal like that, IMO, and if it did make sense for a rare EA to skill up in that, they definitely shouldn’t import those norms and habits into discussions with other EAs.
On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.
“Adopt weird jargon”, notably, isn’t one of the items on the list.
I liked Nate’s argument for weird jargon enough that I included it in footnote 6 (while mainly looking for the explanation of “my model is...”), but IMO you can follow all ten items on the list without using any weird jargon. Though giving probabilities to things while trying to be calibrated does inherently have a lot of the properties of jargon: people who are used to “90% confidence” meaning something a lot fuzzier (that turns out to be wrong quite regularly) may be confused initially when they realize that you literally mean there’s a 9-in-10 chance you’re right.
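To make the calibration point concrete, here is a minimal sketch of what a calibration check looks like. This is my own toy example with made-up numbers, not anything from the post: group your past claims by the probability you stated, then see how often each group actually came true.

```python
# Toy calibration check: do claims I tagged "90%" come true about 9 times in 10?
# The prediction records below are invented purely for illustration.
from collections import defaultdict

# (stated probability, whether the claim turned out to be true)
predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
]

by_stated = defaultdict(list)
for stated, came_true in predictions:
    by_stated[stated].append(came_true)

for stated in sorted(by_stated):
    outcomes = by_stated[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> came true {observed:.0%} of the time ({len(outcomes)} claims)")
```

If the observed frequencies roughly track the stated ones, you’re calibrated; if your “90%” claims only come true 60% of the time, you aren’t, and that’s the sense in which “90% confidence” is meant literally here.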
The jargon from this post I especially think EAs should use routinely is:
symmetric vs. asymmetric weapons
crux
(rationalist) taboo
local validity
inferential gap / inferential distance
Ideological Turing Test (ITT)
Typical Mind Fallacy
terms related to Bayesian probability: probability, priors, retrodict, etc.
Some of these (“crux”, “probability”) are common English words, but I’m proposing using them in a narrower and more precise way. (A toy sketch of the Bayesian terms follows below.)
To be clear, I think that most of the points are good, and thank you for writing this up. Perhaps the real argument I’m making is that “don’t use weird jargon (outside of lesswrong)” should be another principle.
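Here is a toy illustration of the Bayesian terms in the last list item; it’s my own example with invented numbers, not something from the post. A prior is your belief before seeing the evidence, Bayes’ rule turns it into a posterior, and “retrodicting” is asking how much probability a hypothesis assigns to evidence you already have.

```python
# Toy Bayes update, with made-up numbers.
prior_h = 0.2                  # prior: P(H) before seeing evidence E
p_e_given_h = 0.9              # how strongly H retrodicts E: P(E | H)
p_e_given_not_h = 0.3          # how strongly not-H retrodicts E: P(E | not H)

# Law of total probability, then Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"prior P(H) = {prior_h:.2f}, posterior P(H | E) = {posterior_h:.2f}")
# prints: prior P(H) = 0.20, posterior P(H | E) = 0.43
```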
For example, I could translate the sentence:
On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.
to
“I believe that trying out these techniques will probably improve the effectiveness of EA”.
I think the latter statement is straightforwardly better. It may sacrifice a tiny bit of precision, but it replaces it with readability and clarity that allow a much greater portion of the population to engage. (This is not a knock on you; I do this kind of thing all the time as well.)
To go through the list of jargon: I think the ideas behind the jargon are good, but I think people should be asking themselves “is this jargon actually necessary/clarifying?” before using them. For example, I think “typical mind fallacy” is a great term because it’s immediately understandable to a newcomer. You don’t have to read through an attached blogpost to understand the point that is being made. “Inferential gaps”, on the other hand, is a fairly unintuitive term; in most cases you’d be better served by explaining what you mean in plain English rather than sending people off to do link homework.
Perhaps the real argument I’m making is that “don’t use weird jargon (outside of lesswrong)” should be another principle.
Seems like an obviously bad rule to me. “Don’t use weird jargon anywhere in the world except LessWrong” is a way stronger claim than “Don’t use weird jargon in an adversarial debate where you’re trying to rhetorically out-manipulate a dishonest creationist”.
(This proposal also strikes me as weirdly minor compared to the other rules. Partly because it’s covered to some degree by “Reducibility” already, which encourages people to only use jargon if they’re willing and able to paraphrase it away or explain it on request.)
“On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.”
to
“I believe that trying out these techniques will probably improve the effectiveness of EA.”
Seems like a bad paraphrase to me, in a few ways:
“On my model of EA and of the larger world” is actually doing some important work here. The thing I’m trying to concisely gesture at is that I have a ton of complicated background beliefs about the world, and also about how EA should interface with the wider world, that make me much more confident that guidelines like the one in the OP are good ones.
I actually want to signpost all of that pretty clearly, so people know they can follow up and argue with me about the world and about EA if they have different beliefs/models about how EA can do the most good.
“X will probably improve Y” is a lot weaker than “X is one of the best ways to improve Y”.
“Improve the effectiveness of EA” is very vague, and (to my eye) makes it sound like I think these guidelines are useful for things like “making EAs more productive at doing the things they’re already trying to do”.
I do think the guidelines would have that effect, but I also think that they’d help people pick better cause areas and interventions to work on, by making people’s reasoning processes and discussions clearer, more substantive, and more cruxy. You could say that this is also increasing our “effectiveness” (especially in EA settings, where “effective” takes on some vague jargoniness of its own), but connotationally it would still be misleading, especially for EAs who are using “effective” in the normal colloquial sense.
I think overly-jargony, needlessly complicated text is bad. But if “On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.” crosses your bar for “too jargony” and “too complicated”, I think you’re setting your bar waaaay too low for the EA Forum audience.
I think the point I’m trying to make is that you need to adapt your language and norms for the audience you are talking to, which in the case of EA will often be people who are non-rationalist or have never even heard of rationalism.
If you go to an expert in nuclear policy and start talking about “inferential distances” and sending them LessWrong blog posts, you are impeding understanding and communication, not increasing it. Your language may be more precise and accurate for someone else in your subculture, but for people outside it, it can be confusing and alienating.
Of course people on the EA Forum can read and understand your sentence. But the extra length impedes readability and communication, and I don’t think the extra things you signal with it add enough to overcome that. It’s not super bad or anything, but the tendency toward unclear and overly verbose language is a clear problem I see when rationalists communicate in other forums.
My subjective feeling is that all of the terms on this list make conversations less clear, more exhausting, and broadly unpleasant.
You could say that’s unsurprising, coming from a person who deliberately avoids LessWrong. But then I invite you to think about what percentage of the people you might talk to would enjoy LessWrong, and what biases you’d get from only talking with people from that group.
Communication norms aren’t useful if they increase fidelity but decrease people’s willingness to participate in conversation. (Relevant xkcd)
My subjective feeling is that all of the terms on this list make conversations less clear, more exhausting, and broadly unpleasant.
Why? Picking an example that seems especially innocuous to me: why do you feel like the word “probability” (used to refer to degrees of belief strength) makes conversations “less clear”? What are the specific ways you think it makes for more-exhausting or more-unpleasant conversations?
You could say that’s unsurprising, coming from a person who deliberately avoids LessWrong.
I think people who dislike LW should also steal useful terms and habits of thought like these, if any seem useful. In general, a pretty core mental motion in my experience is: if someone you dislike does a thing that works, steal that technique from them and get value from it yourself.
Don’t handicap yourself by cutting out all useful ways of thinking, ideas, arguments, etc. that come from a source you dislike. Say “fuck the source” and then grab whatever’s useful and ditch the rest.
If the only problem were “this concept is good but I don’t want to use a word that LessWrong uses”, I’d just suggest coming up with a new label for the same concept and using that. (The labels aren’t the important part.)
why do you feel like the word “probability” (used to refer to degrees of belief strength) makes conversations “less clear”? What are the specific ways you think it makes for more-exhausting or more-unpleasant conversations?
Because there’s usually no real correspondence between probabilities used in this specific sense and reality. At the same time, the extra detail makes it harder to focus on the parts that are real. Worse, it creates a false sense of scientific precision and reliability, obscuring the truth.
I’m a mathematician so obviously I find probability and Bayesianism useful. But this kind of usage is mostly based on the notion that the speaker and the listener can do Bayesian updates in their heads regarding their beliefs about the world. I think this notion is false (or at least unfounded), but even if it were true for people currently practising it, it’s not true for the general population.
I said “mostly” and “usually” because I do, on rare occasions, find it useful (this week I told my boss there was a 70% chance I’d come to work the next day), but this happens extremely seldom, and only in contexts where it’s clear to both sides that the specific number carries very little meaning.
Don’t handicap yourself by cutting out all useful ways of thinking, ideas, arguments, etc. that come from a source you dislike.
When I talked about avoiding LessWrong what I meant is that I don’t represent the average EA, but rather am in a group selected for not liking the ideas you listed—but that I don’t think that matters much if you’re advocating for the general public to use them.
When I say that there’s a seventy percent chance of something, that specific number carries a very specific meaning: there is a 67% chance that it is the case.
(I checked my calibration online just now.)
It’s not some impossible skill to get decent enough calibration.
I think identifying common modes of inference (e.g., deductive, inductive, analogical) can be helpful when you’re analyzing arguments. Retrodiction describes a stage of retroductive (abductive) reasoning, and so it has value outside a Bayesian analysis.
If there’s an equivalent in wider language for what you’re discussing (for example, “important premise” for “crux”), consider using the more common form rather than specialized jargon. For example, I find the EA use of “counterfactual” confusing: what I think are really discussions of necessary conditions get framed as counterfactuals, whereas to me counterfactual statements are false statements, relevant in a discussion of hypothetical events that do not occur. Many times I’ve wanted to discuss counterfactuals but worried that the conversation with EAs would lead to misunderstandings, as if my analysis were exploring necessary conditions for some action or consequence when that was not the intent.
The “typical mind fallacy” is interesting. On the one hand, I think some inferences that assume shared values or experience are fallacious. On the other hand, some inferences about similarities between people are reliable, and we depend on them: for example, that people dislike insults. A common word starting with ‘n’ has its special cases, but it is mostly taken as a deeply unwelcome insult, and our default is to treat that knowledge as true. We rely on default (defeasible) reasoning when we employ those inferences, and we add nuance or admit special cases for the exceptions. In the social world, the “typical mind fallacy” comes with some strong caveats.