My subjective feeling is that all of the terms on this list make conversations less clear, more exhausting, and broadly unpleasant.
Why? Picking an example that seems especially innocuous to me: why do you feel like the word “probability” (used to refer to degrees of belief strength) makes conversations “less clear”? What are the specific ways you think it makes for more-exhausting or more-unpleasant conversations?
You could say that’s unsurprising, coming from a person who deliberately avoids LessWrong.
I think people who dislike LW should still steal terms and habits of thought like these, if any seem useful. In general, a pretty core mental motion in my experience is: if someone you dislike does a thing that works, steal that technique from them and get value from it yourself.
Don’t handicap yourself by cutting out all useful ways of thinking, ideas, arguments, etc. that come from a source you dislike. Say “fuck the source” and then grab whatever’s useful and ditch the rest.
If the only problem were “this concept is good but I don’t want to use a word that LessWrong uses”, I’d just suggest coming up with a new label for the same concept and using that. (The labels aren’t the important part.)
why do you feel like the word “probability” (used to refer to degrees of belief strength) makes conversations “less clear”? What are the specific ways you think it makes for more-exhausting or more-unpleasant conversations?
Because there’s usually no real correspondence between probabilities used in this specific sense and reality. At the same time, the added detail makes it harder to focus on the parts that are real. Worse, it creates a false sense of scientific precision and reliability, obscuring the truth.
I’m a mathematician, so obviously I find probability and Bayesianism useful. But this kind of usage mostly rests on the notion that the speaker and the listener can do Bayesian updates in their heads regarding their beliefs about the world. I think this notion is false (or at least unfounded), but even if it were true for people currently practising it, it’s not true for the general population.
I said “mostly” and “usually” because I do occasionally find it useful (this week I told my boss there was a 70% chance I’d come to work the next day), but this happens extremely seldom, and only in contexts where it’s clear to both sides that the specific number carries very little meaning.
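(For concreteness, a minimal sketch of the kind of “Bayesian update in your head” being discussed. The rain/clouds scenario and every number below are made up purely for illustration.)

```python
# Minimal Bayesian update with made-up numbers (illustrative only).
prior_rain = 0.30          # P(rain) before seeing any evidence
p_clouds_if_rain = 0.80    # P(dark clouds | rain)
p_clouds_if_dry = 0.25     # P(dark clouds | no rain)

# Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = p_clouds_if_rain * prior_rain + p_clouds_if_dry * (1 - prior_rain)
posterior_rain = p_clouds_if_rain * prior_rain / p_clouds

print(f"P(rain | clouds) = {posterior_rain:.2f}")  # about 0.58
```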
Don’t handicap yourself by cutting out all useful ways of thinking, ideas, arguments, etc. that come from a source you dislike.
When I talked about avoiding LessWrong, what I meant is that I don’t represent the average EA, but rather belong to a group selected for not liking the ideas you listed. That said, I don’t think that matters much if you’re advocating for the general public to use them.
When I say that there’s a seventy percent chance of something, that specific number carries a very specific meaning: there is a 67% chance that it is the case.
(I checked my calibration online just now.)
It’s not some impossible skill to get decent enough calibration.
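(A minimal sketch of the kind of calibration check mentioned above: compare the probabilities you stated with how often the predicted events actually happened. The prediction log below is invented purely for illustration.)

```python
# Calibration check on an invented prediction log (illustrative only).
from collections import defaultdict

# (stated probability, did it happen?) pairs
predictions = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.9, True), (0.9, True), (0.5, False), (0.5, True),
]

buckets = defaultdict(list)
for stated, happened in predictions:
    buckets[stated].append(happened)

# For each stated probability, how often did the event actually occur?
for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"said {stated:.0%}: happened {observed:.0%} of the time (n={len(outcomes)})")
```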