Why defensive writing is bad for community epistemics

I see this as a big problem that makes communication and learning really inefficient. In a culture where defensive writing is the norm, readers learn to expect that their reputation would be greatly at stake if they published anything themselves.

I advocate writing primarily based on what you think will help the reader. I claim this is an act of introspection that’s harder than it seems at first. I’m not trying to judge anyone here. I find this hard myself, so I hope to help others notice what I’ve noticed in myself.[1]

TL;DR: A summary is hard, but: you may be doing readers a disservice if you’re not aware of when you’re optimising your writing purely for helping yourself vs at least partially helping readers as well. Secondly, readers should learn to interpret charitably, lest they perpetuate an inefficient and harmfwl culture of communication.

Definitions

Self-centered writing is when you optimise your writing based on how it reflects on your character, instead of on your expectation of what will help your readers.

Defensive writing is a subcategory of the above, where you’re optimising for making sure no one will end up having a bad impression of you.

Judgmental reading is when you optimise your reading for making inferences about the author rather than just trying to learn what you can learn from the content.

Naturally, these aren’t mutually exclusive, and you can optimise for more than one thing at once.[2]

Takeaways

  1. A culture of defensive writing and judgmental reading makes communication really inefficient, and makes it especially scary for newcomers to write anything. Actually, it makes it scary for everyone.

  2. There’s a difference between trying to make your statements safe to defer to (minimising false positives), vs not directly optimising for that, e.g. because you’re just sharing tools that readers can evaluate for themselves (minimising false negatives). Where appropriate, writers should be upfront about when they’re doing which.

    1. As an example of this, I’m not optimising this post for being safe to defer to. I take no responsibility for whatever nonsense of mine you end up actually believing. :p

  3. You are not being harmed when someone, according to you, uses insufficiently humble language. Downvoting them for it is tantamount to bullying someone for harmless self-expression.


What does a good epistemic community look like?

In my opinion,[3] an informed approach to this is multidisciplinary and should ideally draw on wisdom from e.g. social epistemology, game theory, metascience, economics of science, graph theory, rationality, several fields in psychology, and can be usefwly supplemented with insights from distributed & parallel computing, evolutionary biology, and more. There have also been relevant discussions on LessWrong over the years.

I’m telling you this because the first principle of a good epistemic community is:

Community members should judge practices based on whether the judgment, when universalised, will lead to better or worse incentives in the community.

And if we’re not aware that there even exists a depth of research on what norms a community can try to encourage in order to improve its epistemic health, then we might have insufficient humility and forget to question our judgments. I’m not saying we should be stifled by uncertainty, but I am advocating that we at least think twice about how to encourage positive norms, and not rely too much on cached sensibilities.

I’ll summarise what I think are some of the most basic problems.

1) We judge people much too harshly for what they don’t know

Remember, this isn’t an academic prestige contest, and the only thing that matters is whether we have the knowledge we need in order to do the best that we can. I’m not talking about how we should care about each other’s feelings more and pretend that we’re all equally competent and know an equal amount of stuff. No, I’m saying that if we have a habit of judging people for what they don’t know, we’ll be incentivised to waste time learning all the same things, and we lose out on diversity of knowledge.

And who are you to judge whether knowing any particular thing is truly essential for what they’re trying to do, given that they’re the one trying to do it?

“His ignorance was as remarkable as his knowledge. Of contemporary literature, philosophy and politics he appeared to know next to nothing.” — Dr. Watson, on Sherlock Holmes, in A Study in Scarlet

A community where people reveal their uncertainty, ask dumb questions, and reward each other for the courage and sagacity to admit their ignorance will be more effective than one where ignorance is judged harshly.

2) Celebrate innocence & self-confidence; stop rewarding modesty

This one is a little sad but… when I see someone declare that they have important things to teach the rest of the community, or that they think they can do more good than Peter Singer, I may not always agree that they have a chance—but I’ll tear up and my heart will beam with joy and empathy for them, because I know they’re sacrificing social comfort and perceived humility in order to virtuously reveal their true beliefs.

There are too many reasons for this, but the first is fundamentally about being a kind and decent human being. When a kid eagerly comes up to you and offers to show you a painting they’re proud of, will you not be happy for the kid? Or will you harshly tell them to fall back in line and never call attention to themselves ever again? Or maybe you only think it’s “inappropriate” when it’s an upstanding adult who does it. After all, they’ve outgrown their right to be innocent. No. How can a community so full of kind people be so eager to put people down for unfolding their unconquered personalities openly into the world?

But if you insist on stealing the sunlight from their smiles… maybe arguments about effectiveness will convince you instead. It ties back to how people, in ordinary cases, vastly overestimate the downsides of not knowing particular things or not having “credentials”, and underestimate the advantage of learning to think independently as early as possible.

It’s important that people be allowed to try ambitious things without feeling like they need to make a great production out of defending their hero license.

When we have a community in which people are afraid to stick out, and they’re always timidly questioning whether they know enough to start actually Doing The Thing, we’re losing out on an enormous number of important projects, as well as motivation, ambition, and untroubled mental health. It may not feel as urgent because we can’t see the impact we miss out on.

3) Charitable interpretations by default

When you interpret someone with charity, you start out with the tentative assumption that they are a decent human being who means well, and means something coherent that you can learn from if you listen attentively. And you keep curiously looking for reasons not to abandon these assumptions.

EA is not perfect, but there is something different going on here than in the other parts of the world I experience. I don’t know about you, but I come from a world in which “what’s in it for me?”, “it’s not my responsibility”, and careless exploitation is the norm. If my intuitions about what people are likely to intend have been trained on a world such as this, then I’m not going to be very charitable.

The EA forum is not perfect, but it is different. If you’re bringing with you the same reflexive heuristics while you’re reading the forum, and you don’t put in effort to adjust to the new distribution of actual intentions, then you risk incrementally regressing the forum down to your expectations.

If writers have to spend five paragraphs on disclaimers and clarification just to make sure we don’t accuse them of nazism, well, we’ve wasted everyone’s time, and we’ve lost a hundred good writers who didn’t want to take the risk in the first place.

But it’s worse, because if we already have the norm of constantly suspecting nazism or whatever, then the first writer to not correct for that misunderstanding is immediately going to look suspicious. This is what’s so vicious about refusing to interpret with a little more recursive wisdom. If you have an equilibrium of both expecting and writing disclaimers about X, it self-perpetuates regardless of whether anyone actually means or supports X.

You might point out that disclaiming X isn’t really a big efficiency loss, but X is just the tip of an iceberg.

Part of the problem is just laziness and impatience. If people have trained their intuitions on the level of discourse found in the world at large, they may not be used to the level of complexity commonly found in arguments on the forum. So they (at first reasonably) assume that a few simple expectations they’ve picked up here and there will suffice, and that the point will immediately snap into understanding after reading the first paragraph. But that often doesn’t work for the level of complexity and unusualness of meaning on the forum.


So what’s wrong with defensive writing?

(If you’ve missed it, there are examples of defensive writing in the footnotes.)

Ultimately, my complaint is that it claims to be about benevolence, yet it is not actually about benevolence.[4] If the lack of benevolence were all there was to it, I would have no gripe with it. But because it’s routinely confused for benevolence, good people end up being fooled by it, and it causes all sorts of problems.

Remember, you’re not harming other people when they believe you are mistaken, or when you say a wrong thing in a way that they can easily detect, or when they think you’re a nazi because you mentioned something about DNA and didn’t explicitly disclaim that you’re not a nazi.

See the above section for the problems with defensive disclaimers.

It’s sort of the opposite of writing ostentatiously just to show off (e.g. using math or technical terms that signal impressiveness but don’t actually help with understanding). But not because it’s any less self-centered. Wasting the reader’s time by trying to influence their impression of you is more or less the same regardless of whether you’re defending against negative impressions or inviting positive ones.

Another, perhaps more damaging, form of defensive writing is unnecessarily trying to demonstrate that you’ve done your research due diligence, perhaps to establish some kind of “license” for writing about the topic at all, or at least to make it clear that you’ve “done work”. But if I only have one interesting thing to teach, it doesn’t matter to the reader whether I had to read 1 book or 50 books to find it. And it’d be especially indulgent to then go into detail about the contents of the other 49 books when I know that’s not where the value is.


I think it’s easy to misunderstand me here. What I’m essentially advocating is that every word you write should flow from intentions in your brain that you are aware of and approve of.

And what you optimise for should depend on what you’re writing. Sometimes I primarily optimise for A) making readers have true beliefs about what I believe. Other times I primarily optimise for B) providing readers with tools that help them come to their own conclusions.

When I’m doing B, it doesn’t matter what they believe I believe. If out of ten things I say, nine will be plain wrong, but one thing will be so right that it helps the reader do something amazing with their lives, I consider that a solid success.

Purpose A is necessary if you’re somewhat of an authority figure and you’re letting people know what you believe so they can defer to you. E.g. if you’re a doctor, you don’t want your patients to believe false things about your beliefs.

As it happens, I’m mostly trying to optimise this post for B, but didn’t I say that this means I shouldn’t care whether I’m misunderstood? So why am I spending time in this section on defending against misunderstandings?

Because I think this is complicated enough that readers may end up not only misunderstanding what I believe, but also misunderstanding the tools I’m trying to teach. I think people can benefit from these tools, so I want to make sure they’re understood.[5]

But also just because… well, there are limits to how brave you are required to be.

I want to be liked, so I sometimes signal modesty for no further reason than to be liked. This is ok. And sometimes I have to affect humility in order to make my writing bearable to read, because we already live in a world where that’s expected, and I can’t unilaterally defect from every inadequate equilibrium I see—the personal cost is too high.

So I try to do what I can on the margin: show the world that I can be happy about my achievements (“bragging”), do less modesty, say more wrong things, etc. But when the cost is low, as it is when you see others defect, there are fewer excuses not to at least respect them and try to feel happy that they’re nudging the equilibrium.

  1. ^

    Ironic, yes, but this will be our first example of defensive writing! First, the altruistic value of this paragraph is that it points out that noticing defensiveness is a difficult act of introspection. That’s usefwl for the readers to know. The rest of the paragraph only serves the purpose of making sure no one thinks I’m being arrogant.

    Notice also how when I write “I claim this is” instead of “this is”, I change absolutely nothing about what the readers learn, but I am changing readers’ impressions about how humble I’m being. If I can seem more humble by adding two more words, I usually say it’s worth the cost—but I’m aware that I am profiting only myself, and it is my readers that pay the cost.

  2. ^

    Another example. I don’t want readers to think that I believe you cannot optimise for two things at once, so I point that out. But no one in their right mind would actually think that! So by pointing it out, I’m not helping them understand anything differently about the content. The only purpose of the sentence is to prevent people who are actively looking to misunderstand me from misunderstanding me.

  3. ^

    Part of my motivation for writing this whole paragraph was to provide an example of something that could seem like it was optimised for ostentation (name-dropping a bunch of scientific fields), but was in reality optimised for something else. The point is that people are generally much too quick to judge people for perceived intentions, and if I were afraid of this then I wouldn’t have felt permitted to write this sentence. A culture of judgment and no charity will prevent people from writing sentences they actually think are helpfwl.

    In addition, notice how this usage of “in my opinion” actually does communicate important-to-the-reader information about the value of deferring to me, so it’s not just a ploy to seem more humble.

  4. ^

    Norwegian has the right word here, and it’s “nestekjærlighet”. The best I can do as a direct translation is “otherpersongoodintentions”. (Sorry.) It makes it unfailingly clear that it’s not about raising one’s own moral character—that would just be selfpersongoodintentions.

  5. ^

    I actually think nearly all the value of these exercises is in things I haven’t made the case for. Those explanations would be longer and require more stuff. But I hoped the value present would make it interesting enough, and I didn’t want to make the post longer than it already is.