Riffing off the alliance-mindset point: one shift I’ve personally found really helpful in decision-making settings (though I could imagine it backfiring for other people) is switching from thinking “my job is to come up with the right proposal or decision” to “my job is to integrate the evidence I’ve observed (firsthand, secondhand, etc.) and reason about it as clearly as I can”.
The first framing made me feel like I was failing if other people contributed: I was “supposed” to reach the best decision, but instead I came to the wrong one, which then had to be, humiliatingly, “fixed”. That frame is more individualistic, and it carries a sense of final responsibility that raises the emotional heat beyond anything plain Bayesian reasoning would explain.
The latter frame evokes thoughts like “of course what I’m able to observe and think of is only a small piece of the puzzle; of course others have lots of value to add”, and it shifts my experience of changing decisions from embarrassing, a sign of failure, to natural and inevitable. It shifts my orientation towards others from defensiveness to curiosity and eagerness to elicit their knowledge. And it shifts my orientation towards myself from a stakesy attempt to squeeze out an excellent product through sheer emotional energy, to something more reflective, internally quiet, and focused on the outer world rather than on what my proposals will say about me.
I could imagine this causing people to go easy on themselves or try less hard, but for me it’s been really helpful.
Great post, possibly essential reading for community-builders; adding a link to this in several of my drafts + my retreat post. I think another important thing for CBers is to create a culture where changing your mind is high-status and having strongly held opinions without good reasons is not, which is basically the opposite of the broader culture (though I think EA does a good job of this overall). Ways I’ve tried to do this in settings with EA newcomers:
1) Excitedly changing your mind. I’m thinking of a Robi Rahmanism: “The last time I changed my mind about something was right now.” This doesn’t just model openness; it also makes changing your mind a two-way street, rather than you having all the answers and them just needing to learn from you, which I think makes it less identity-threatening or embarrassing to change your mind.
2) Saying, in conversations with already-bought-in EAs held in front of newcomers, things like “Hmm, I think you’re under-updating.” This shows that we expect longtime EAs to keep evaluating new evidence (and that we’re comfortable disagreeing with each other), rather than just memorizing a catechism.
Strongly agree: fostering a culture of open-mindedness (love the example from Robi) and the expectation that more experienced EAs keep updating seems good. In the updating case, I think making sure everyone knows what “updating” means is a priority (it sounds pretty weird otherwise). Maybe we should talk about introductory Bayesian probability in fellowships and retreats.
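For concreteness, here’s the kind of minimal worked example one might use in such a session (the numbers are arbitrary, purely for illustration): if your prior credence in a hypothesis $H$ is $P(H) = 0.3$, and you observe evidence $E$ with $P(E \mid H) = 0.8$ and $P(E \mid \neg H) = 0.2$, then Bayes’ rule gives

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{0.8 \times 0.3}{0.8 \times 0.3 + 0.2 \times 0.7} = \frac{0.24}{0.38} \approx 0.63.$$

So the evidence should move your credence from 30% to about 63%, and that move is the “update”.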
Yes, true, avoiding jargon is important!
I might add something along the lines of “have them lead the conversation by letting their questions and vague feelings do the steering”.