Perhaps the real argument I’m making is that “don’t use weird jargon (outside of lesswrong)” should be another principle.
Seems like an obviously bad rule to me. “Don’t use weird jargon anywhere in the world except LessWrong” is a way stronger claim than “Don’t use weird jargon in an adversarial debate where you’re trying to rhetorically out-manipulate a dishonest creationist”.
(This proposal also strikes me as weirdly minor compared to the other rules. Partly because it’s covered to some degree by “Reducibility” already, which encourages people to only use jargon if they’re willing and able to paraphrase it away or explain it on request.)
“On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.”
to
“I believe that trying out these techniques will probably improve the effectiveness of EA.”
Seems like a bad paraphrase to me, in a few ways:
“On my model of EA and of the larger world” is actually doing some important work here. The thing I’m trying to concisely gesture at is that I have a ton of complicated background beliefs about the world, and also about how EA should interface with the wider world, that make me much more confident that guidelines like the one in the OP are good ones.
I actually want to signpost all of that pretty clearly, so people know they can follow up and argue with me about the world and about EA if they have different beliefs/models about how EA can do the most good.
“X will probably improve Y” is a lot weaker than “X is one of the best ways to improve Y”.
“Improve the effectiveness of EA” is very vague, and (to my eye) makes it sound like I think these guidelines are useful for things like “making EAs more productive at doing the things they’re already trying to do”.
I do think the guidelines would have that effect, but I also think that they’d help people pick better cause areas and interventions to work on, by making people’s reasoning processes and discussions clearer, more substantive, and more cruxy. You could say that this is also increasing our “effectiveness” (especially in EA settings, where “effective” takes on some vague jargoniness of its own), but connotationally it would still be misleading, especially for EAs who are using “effective” in the normal colloquial sense.
I think overly-jargony, needlessly complicated text is bad. But if “On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.” crosses your bar for “too jargony” and “too complicated”, I think you’re setting your bar waaaay too low for the EA Forum audience.
I think the point I’m trying to make is that you need to adapt your language and norms to the audience you’re talking to, which in the case of EA will often be people who are non-rationalists or have never even heard of rationalism.
If you go to an expert in nuclear policy and start talking about “inferential distances” and sending them links to LessWrong blog posts, you are impeding understanding and communication, not improving them. Your language may be more precise and accurate for someone else in your subculture, but for people outside it, it can be confusing and alienating.
Of course people on the EA Forum can read and understand your sentence. But the extra length impedes readability and communication, and I don’t think the extra things you signal with it add enough to overcome that. It’s not super bad or anything, but the tendency toward unclear, overly verbose language is a clear problem I see when rationalists communicate in other forums.