A hedge I’d add: “...unless these people know each other from outside the boardgame club.”
“We established a policy that established members, especially members of the executive, were to refrain from hitting on or sleeping with people in their first year at the society.”
This sounds super reasonable for EA, too. How would you enforce/communicate this?
Full disclosure, because without it, this post would be a bit phony: I haven’t always followed this policy within EA or outside, and took just one or two weeks from first thinking it might be good to implement it in EA to writing this post.
In general, if I write about community dynamics, assume that I think about them this thoroughly not because I’m extraordinarily virtuous and clear-sighted with regard to people stuff, but because I’m sometimes socially a bit clumsy, and all these models and methods help me function at a level that just comes naturally to others. The question guiding my posts on community dynamics is generally something like: “What would I-from-ten-years-ago have needed to know to not make the same mistakes I did?”
Yep, I’m with Xavier here. The rule gives community builders a bit of an incentive not to make EA their only social bubble (which I think is inherently good). And while it is not without workarounds, all of them still cushion the problem it addresses.
For example, it encourages local community builders to hand over event facilitation to others more often. And if the rule is publicly known, participants can take a break from events that one leader leads to get around the rule. If participants don’t know the rule, they’d get informed about its existence when they hit on an organizer. In either case, the consequence of even intentionally working around the rule would be taking it slow.
Yup, “don’t hit on people who don’t hit on me first” is a weaker rule I had already decided to adhere to in EA before I started thinking about the one outlined in this post. Independent of power considerations, it just seems utterly necessary for managing the gender imbalance.
Yep, the problem this particular rule tries to fix is that of perceived power imbalance and all the troubles that come with it.
It is an imperfect proxy for sure, but non-proxy rules like “No dating if there is a perceived power imbalance.” are very, very prone to tempt people into motivated reasoning. It can get very hard for humans to evaluate their power imbalance with Alice when oh damn are these freckles cute. False beliefs, from the inside, feel not like beliefs, but like the truth. Because of that, I wouldn’t trust anyone with power who would trust themselves with power.
Note also that while “Bob has power over Alice’s career” is a significant component of how power works in EA, power among humans has many subtler nuances than actual access to resources. Even without explicit concerns like “If I don’t do what Bob wants, Bob will make my career progression harder.”, power is shiny and overpowering and does all kinds of funny things to our monkey brains. See for example how our brains automatically adjust what we consider good fashion choices to who we deem popular in our particular subcultural bubble, how we mold our habits after them, etc.
For a more crass example, the 20th century had its fair share of spiritual leaders with sex scandals. Though e.g. Osho had no power over his followers’ real-world careers, they worshipped him like a demigod. I think it goes without saying that it would be, if not impossible, at least outstandingly difficult for him to have a truly consensual relationship with one of his followers. Because there’s no true “yes” without an easy “no”, and there’s no easy “no” if the prophet himself calls you to his quarters.
(Which is of course very sad and inconvenient for Osho, and a requirement to adhere to this rule might have turned him off guruing completely, because the list of documented 20th-century female gurus is short.)
I know that the rule is non-negotiable for people who facilitate retreats under the AuthRev brand.
AuthRev is rather influential in the (especially North American) AR scene, so I wouldn’t be surprised if the rule seeped out further from there. I’m not well-networked enough there to know the details. And even if I did, I don’t think I’d want to share the saucy stories that led to people adjusting the timelines upward and downward until they found their current form.
Thanks a lot! Yep, a question I always ask myself in EA’s diversity discussions is “Which kind of diversity are we talking about?”
A LessWrong post on the topic you might like if you didn’t read it yet is Kaj Sotala’s “You can never be universally inclusive”.
Don’t ask what EA can do for you, ask what you can do for EA.
An obvious-in-hindsight statement I recently heard from a friend:
“If I believed that me being around was net negative for EA, I’d leave the community.”
While this makes complete sense in theory, it is emotionally difficult to commit to it if most of your friends are in EA. This makes it hard for us to evaluate our impact on the community properly. Motivated reasoning is a thing.
So, it may be worthwhile for us to occasionally reflect on the following questions:
If I were to look back in ten years and find that my presence, in hindsight, was bad for EA: what would the reasons be?
Who could I ask for an honest evaluation of which bits of my behavior serve the cause, and which harm it?
If I were to decide that my presence harms the community: how would I get my social needs met anyway?
Whoopsie, I’m insufficiently aware of the English language conventions there. Thanks, changed.
A fancy version might be some form of integration between the EA Forum and e.g. Kialo, where forum accounts can be used to partake in the discussion trees, and forum posts can be used as discussion contributions.
This shifted my opinion towards being agnostic/mildly positive about this public statement.
I’m still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms, for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yea, this amount of public support may well nudge grantmakers/donors to invest more into community health. If so, I’m definitely in favor of that.
Ok these strong down- and disagreement-votes are genuinely mysterious to me now.
The only interpretation that comes to mind is that somebody expects that something bad could come from this offer. I can’t imagine anything bad coming from it, so I’d appreciate feedback. Either here, where I can react, or in my Admonymous is fine.
Thanks, that’s encouraging feedback!
Anyplace else you think I should advertise this? I already got the first booking. But given the mixed voting score, I don’t expect this post to still be read by anyone 2-3 days from now.
Thanks, I removed my downvote after reading this comment.
Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I no longer think that this is an appropriate place for a slippery-slope argument.
_____________
I’d like to explain why I won’t sign this document, because a voice like mine seems to still be missing from the debate: Someone who is worried about this pledge while at the same time having been thoroughly involved in leftist discourse for several years pre-EA.
So here you go for my TED talk.
I’m not a Sam in a bunch of ways: I come from a working-class background. I studied continental philosophy and classical Greek at an unknown small-town uni in Germany (and was ashamed of that for at least my first two years of involvement with EA). Though I was thunderstruck by the simple elegance of utilitarian reasoning as a grad student, I never really developed a mind for numbers and never made reading academic papers my guilty pleasure. I had been with the libertarian socialists long enough before getting into EA that I’m still way better at explaining Hegel, Marx, Freud, the Frankfurt School, the battle lines between materialist and queer feminism, or how to dive a dumpster than even basic concepts of economics. In short: As far as knowing the anti-racist and anti-sexist discourse is concerned, I may well be in the 95th percentile of the EA community.
And because of all of this life experience, reading this statement sent a cold shiver down my spine. Here’s why.
I have been going by female pronouns for a couple of years. That’s not a fortunate position to be in, in a small German university city whose cultural discourse is always 10-20 years behind any Western capital city, especially in the Anglo-Saxon world. I’ve grown to love the feeling of comfort, familiarity, and safety that anti-discriminatory safe spaces provide, and I’ve actively taken part in making these spaces safe—sometimes in a more, sometimes in a less constructive tone.
But while enjoying that safety, comfort, and sense of community, I constantly lived with a nagging, half-conscious fear of getting ostracized myself one day for accidentally calling the wrong piece of group consensus into question. Meanwhile, I was never quite sure what the group consensus actually was, because I’m not always great at reading rooms, and because just asking all the dumb questions felt like way too big a risk for my standing in the tribe. Humility has not always been a strength of mine, and I haven’t always valued epistemic integrity over having friends.
The moment when the extent of this clusterfuck of groupthink dawned on me was after we went to the movies for a friend’s birthday party: Iron Sky 2 was on the menu. After leaving the cinema, my friend told me that during the film, she occasionally glanced over to me to gauge whether it was “okay” to laugh about, well, Hitler riding on a T-Rex. She glanced over to me in order to gauge what was acceptable. She, who was so radically Leninist that I never dared mention that I’m not actually all that fond of Lenin. Because she had plenty of other wonderful qualities besides being a Leninist, and had I risked getting kicked out of the tribe over a petty who’s-your-favorite-philosopher debate, that would have been very sad.
On that day, I realized that both of us had lived with the same fear all along. And that all our radical radicalism was at least two thirds really, really stupid virtue signalling. Wiser versions of us would have cut the bullshit and said: “I really like you and I don’t want to lose you.” But we didn’t, because we were too busy virtue signalling at each other that really, you can trust me and don’t have to ostracize me, I’m totally one of the Good Guys(TM).
Later, I found the intersection between EAs and rationalists: A community that valued keeping your identity small. A community where the default response to a stark disagreement was not moral outrage or carefully reading the room to grasp the group consensus, but “Let’s double crux that!”, and then actually looking at the evidence and finding an answer, or agreeing that the matter isn’t clear. A community where it was considered okay and normal and obvious to say that life sometimes involves very difficult tradeoffs. A community where it was considered virtuous to talk and think as clearly and level-headedly as possible about these difficult tradeoffs.
And in this community, I found mental frameworks that helped me understand what went wrong in my socialist bubble: Most memorably, Yudkowsky’s Politics is the Mind-Killer and his Death Spirals sequence. I’d place a bet that the majority of the people who are concerned about this commitment know their content, and that the majority of the people who support it don’t. And I think it would be good if all of us were to (re-)read them amidst this drama.
I’m a big fan of being considerate of each others’ feelings and needs (though I’m not always good at that). I’m a big fan of not being a bigot (though I’m not always good at that). Overall, I’d like EA to feel way more like the warm, familiar, supportive anti-discriminatory safe spaces of my early twenties.
Unfortunately, I don’t think this pledge makes much of a difference there.
At the same time, after I saw the destructive virtue signalling of my early 20s play out as it did, I do fear that this pledge and similar contributions to the current debate might make all the difference for breaking EA’s discourse norms.
And by “breaking EA’s discourse norms”, I mean moving them way closer to the conformity pressure and groupthink I left behind.
If we start throwing around loaded and vague buzzwords like “(anti-)sexism” and “(anti-)racism” instead of tabooing our words and talking about concrete problems, how we feel about them, and what we think needs doing in order to fix them, we might end up at the point where parts of the left seem to be right now: Ostracizing people not only when that is necessary to protect other community members from harm, but also when we merely talk past each other and are too tired from infighting to explain ourselves and try and empathize with one another.
I’d be sad about that. Because then I’d have to look for a new community all over again.
I’ve attended an online LessWrong Community Weekend co-organized by Linda and can vouch for her capability to organize unconferences way beyond the level of what I thought possible.
Agreed, “knowledge capital” fits well.
And though I sometimes sound a whole lot Slytherin, I absolutely don’t want to normalize using the Dark Arts in EA community building. I’ll change the term in the initial post and link to this comment thread.
Do you have a link to a crisp definition of “ideational capital”? I googled your citation and found a book, but apparently my skill in deciphering political science essays has massively declined since university.
A meta-level remark: I notice I’m a bit emotionally attached to “memetic capital”, because I’ve thought about these things under the term “memetics” a bunch during the last year. In addition, a person whose understanding of cultural evolution I admire tends to speak about it in terms of memetics, so there are some matters of tribal belonging at play for me. Just flagging this, because it makes me prone to rationalization and to overestimating the strength of my reasons to defend the term “memetic capital”.
_____________
Now to why I genuinely think “memetic capital” is more fitting:
1. It is useful not only for talking about propositional knowledge.
When I read “ideational capital”, or generally “ideas”, I initially think exclusively of propositional knowledge, i.e. things that can be stated as facts in natural language, like “Tel Aviv is a city in Israel.” But there are other forms of knowledge than propositional knowledge. John Vervaeke, for example, describes the “4 Ps of knowledge”:
- Propositional knowing (see above)
- Procedural knowing (knowledge how to do things, e.g. ride a bicycle, or fill a tax form)
- Perspectival knowing (knowledge of where and how you are situated in the world as an embodied being, e.g. where up and down is, that this is a computer, and that that is a glass door I can’t just pass through without opening it.)
- Participatory knowing (knowledge of how to move in the world. E.g., whether you feel stuck and confused staring at a bouldering problem, or just get into flow while your hands and feet find the right places at the wall almost on their own.)
In my understanding, memetics is useful for describing all four forms of knowing, while at first glance, “ideational capital” only refers to the propositional kind. And in my opinion, the more interesting aspects of memetic capital are procedural, perspectival, and participatory knowing. It’s better than nothing to have propositional knowledge that Double Crux exists and what the key steps listed in the CFAR handbook are. But the more interesting, and more important, thing is having an intuitive grasp of the spirit of the method, and intuitively, without thinking, applying it in a conversation like this one.
2. Memetics lends itself to a systemic rather than an engineering approach to understanding and influencing social systems.
The observation that memes’ evolutionary fitness is orthogonal to their usefulness points out a problem, but it also helps us get a better grasp of which strategies for spreading valuable memes might and might not work. For example, if we ask “Which core concepts should more people in EA know?”, we end up writing a curriculum, like the post above. However, we can also ask “Which trajectory does EA’s cultural evolution have, and how can we influence that trajectory so that it flows in a more desirable direction?” Then, we might discover more, and completely different, attack routes. For example, we might end up with a call to action and an offer to connect group organizers to facilitators, like the one at the end of my post.
I wrote the following on Facebook in a precursor to this EAF post:
In groups with a healthy debate culture, the process of group beliefs shifting around a polarizing topic looks a lot less like people convincing each other with arguments and a lot more like osmosis. Because the main thing that has to be addressed is not peoples’ consciously held beliefs, but the underlying values, aspects of personal identity and belonging, and peoples’ emotions about them.
What (I think) happens is that at first, the other side sounds like outrageous out-group nonsense. Then you hear more and more people you *like* repeating the out-group’s opinions. With each time, you empathize a tiny bit more with that opinion, and over time, it feels less and less like evil out-group nonsense and more and more like “that’s coherent, though not the way I’d go about things.” And gradually, people converge towards a new stable equilibrium of how to do stuff that accounts for all of the formerly polarized interests.
As it is based on evolutionary theory, I think memetics is particularly well-suited for describing and understanding processes of cultural evolution like the one outlined above, which I think is currently happening in EA. And the better we understand these processes, the better we can intervene on them to prevent bad things. These bad things could be an EA-internal culture war, or even the community eventually breaking apart because it can’t handle its own diversity.
When I say “memetic capital”, I don’t have a specific set of timeless ideas in mind that all EAs should know/should have known about all along. Instead, I think of an ever-changing egregore of ideas, processes, traditions, social customs, social and psychotechnologies, figures of speech, and what have you. And the reference to evolutionary theory that the term “memetic” implies feels very elegant to me for pointing at this egregore.
The crux of EA’s (or any social bubble’s) memetic capital is that it is hard to inventory, hard to steer, and that in some sense, we are its servants rather than its masters. I think memetics can describe that. And in the process of describing, it can help us gain more agency over which memes we want to keep and which ones we want to get rid of.
_____________
Do let me know whether this makes sense or sounds like total gibberish to you. I’m thinking through/explaining all these things for the first time. And the way I think about social systems is influenced way more by continental philosophy, psychoanalysis, systems theory, and Buddhism than by the more STEM/engineering-style approaches that are common in EA. Thus, I have no clue whether I’m at all understandable to people who read different books than I do.
Thanks for writing this up. I agree with most of these points. However, not with the last one:
If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:
Funded community builders would possibly have even stronger incentives to prioritize community growth over sustainable planning, accountability infrastructure, and community health. To my knowledge, CEA’s past funding policy incentivized community builders to goodhart on acquiring new talent and funds, at the cost of building sustainable network and structural capital, and at the cost of fostering constructive community norms and practices. As long as one avoided visibly damaging the EA brand or turning off the very most talented individuals, it was just financially unreasonable to pay much attention to these things.
In other words, the financial incentives so far may have forced community builders into becoming the hard-core utilitarians you are concerned about. And accordingly, they were forced to be role models of hard-core utilitarianism for those they built community for. This may have contributed to the pre-FTX-collapse EA orthodoxy, where it seemed to me that hard-core utilitarianism was generally considered synonymous with value-alignedness/high status.
I don’t expect this problem to get better if the bar for getting/remaining funded as a community builder gets higher—unless the metrics change significantly.
Access to informal networks would become even more crucial than it already is. If we take money out of community building, we apply optimization pressure away from welcomingness/having low entry barriers to the community. Even more of EA’s onboarding and mentorship than is already the case will be tied to informal networks. Junior community members will experience even stronger pressure to try and get invited to the right parties, impress the right people, to become friends and lovers with those who have money and power.
Accordingly, I suspect that the actual answer here is more professionalization, and in a different direction. Specifically:
Turning EA community building from a career stepping stone into a long-term career, with proper training, financial security, and everything. (CEA already thought of this, of course; I can’t find the relevant post.)
Having more (and more professionalized) community health infrastructure in national and local groups. For example, point people that community members actually know and can talk to in-person.
CEA’s community health team is important, and for all I know, they are doing a fairly impressive job. But I think the bar for reaching out to community health people could be much lower than it currently is. For many community members, CEA’s team are just strangers on the internet, and I suspect that all too many new community members (i.e. those most vulnerable to power abuse/harassment/peer pressure) haven’t heard of them in the first place.
Creating stronger accountability structures in national and local groups, like a board of directors that oversees larger local groups’ work without being directly involved in it. (For example, EA Munich recently switched to a board structure, and we are working on that in Berlin ourselves.)
For this to happen, we would need more experienced and committed people in community building. While technically, a board of directors can be staffed by volunteers entirely, withdrawing funding and prestige from EA community building will make it more difficult to get the necessary number of sufficiently experienced and committed people enrolled.
Thoughts, disagreement?
(Disclaimer on conflict of interest: I’m currently EA Berlin’s Community Coordinator and fundraising to turn that into a paid role.)