Whoopsie, I’m insufficiently aware of the English language conventions there. Thanks, changed.
Severin
A fancy version might be some form of integration between the EA Forum and e.g. Kialo, where forum accounts can be used to partake in the discussion trees, and forum posts can be used as discussion contributions.
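To make that concrete, here is a minimal sketch of the data mapping such a bridge might use. Everything in it is hypothetical: neither the EA Forum nor Kialo exposes these types or, as far as I know, any public integration API; the names are just illustrations of the idea.

```typescript
// Hypothetical sketch: mapping EA Forum posts onto a Kialo-style claim tree.
// None of these types correspond to real Forum or Kialo APIs.

interface ForumPost {
  id: string;
  authorHandle: string; // the shared account identity across both platforms
  title: string;
  body: string;
}

type Stance = "pro" | "con";

interface ClaimNode {
  claim: string;          // a single debatable statement
  stance?: Stance;        // relation to the parent claim; absent for the root
  sourcePostId?: string;  // forum post this claim was extracted from
  children: ClaimNode[];
}

// Attach a forum post as a contribution under an existing claim.
// Returns true if the parent claim was found anywhere in the tree.
function addContribution(
  tree: ClaimNode,
  parentClaim: string,
  post: ForumPost,
  stance: Stance
): boolean {
  if (tree.claim === parentClaim) {
    tree.children.push({
      claim: post.title,
      stance,
      sourcePostId: post.id,
      children: [],
    });
    return true;
  }
  return tree.children.some((c) => addContribution(c, parentClaim, post, stance));
}
```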
This shifted my opinion towards being agnostic/mildly positive about this public statement.
I’m still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms, for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yeah, this amount of public support may well nudge grantmakers and donors to invest more into community health. If so, I’m definitely in favor of that.
Ok these strong down- and disagreement-votes are genuinely mysterious to me now.
The only interpretation that comes to mind is that somebody expects that something bad could come from this offer. I can’t imagine anything bad coming from it, so I’d appreciate feedback. Either here, where I can react, or in my admonymous is fine.
Thanks, that’s encouraging feedback!
Anywhere else you think I should advertise this? I already got the first booking. But given the mixed voting score, I don’t expect this post to still be read by anyone 2-3 days from now.
Thanks, I removed my downvote after reading this comment.
Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I no longer think that this is an appropriate place for a slippery-slope argument.
_____________
I’d like to explain why I won’t sign this document, because a voice like mine seems to still be missing from the debate: Someone who is worried about this pledge while at the same time having been thoroughly involved in leftist discourse for several years pre-EA.
So here you go for my TED talk.
I’m not a Sam in a bunch of ways: I come from a working-class background. I studied continental philosophy and classical Greek at an unknown small-town university in Germany (and was ashamed of that for at least my first two years of involvement with EA). Though I was thunderstruck by the simple elegance of utilitarian reasoning as a grad student, I never really developed a mind for numbers and never made reading academic papers my guilty pleasure. I’d been with the libertarian socialists long enough before getting into EA that I’m still way better at explaining Hegel, Marx, Freud, the Frankfurt School, the battle lines between materialist and queer feminism, or how to dive a dumpster than even basic concepts of economics. In short: As far as knowing the anti-racist and anti-sexist discourse is concerned, I may well be in the 95th percentile of the EA community.
And because of all of this life experience, reading this statement sent a cold shiver down my spine. Here’s why.
I have been going by female pronouns for a couple of years. That’s not a fortunate position to be in in a small German university city whose cultural discourse is always 10-20 years behind any Western capital, especially in the Anglo-Saxon world. I’ve grown to love the feeling of comfort, familiarity, and safety that anti-discriminatory safe spaces provide, and I’ve actively taken part in making these spaces safe—sometimes in a more, sometimes in a less constructive tone.
But while enjoying that safety, comfort, and sense of community, I constantly lived with a nagging, half-conscious fear of getting ostracized myself one day for accidentally calling the wrong piece of group consensus into question. Meanwhile, I was never quite sure what the group consensus actually was, because I’m not always great at reading rooms, and because just asking all the dumb questions felt like way too big a risk for my standing in the tribe. Humility has not always been a strength of mine, and I haven’t always valued epistemic integrity over having friends.
The moment when the extent of this clusterfuck of groupthink dawned on me was after we went to the movies for a friend’s birthday party: Iron Sky 2 was on the menu. After leaving the cinema, my friend told me that during the film, she occasionally glanced over to me to gauge whether it was “okay” to laugh about, well, Hitler riding on a T-Rex. She glanced over to me in order to gauge what was acceptable. She, who was so radically Leninist that I never dared mention that I’m not actually all that fond of Lenin. Because she had plenty of other wonderful qualities besides being a Leninist. And had I risked getting kicked out of the tribe over a petty who’s-your-favorite-philosopher debate, that would have been very sad.
On that day, I realized that both of us had lived with the same fear all along. And that all our radical radicalism was at least two thirds really, really stupid virtue signalling. Wiser versions of us would have cut the bullshit and said: “I really like you and I don’t want to lose you.” But we didn’t, because we were too busy virtue signalling at each other that really, you can trust me and don’t have to ostracize me, I’m totally one of the Good Guys(TM).
Later, I found the intersection between EAs and rationalists: A community that valued keeping your identity small. A community where the default response to a crass disagreement was not moral outrage or carefully reading the room to grasp the group consensus, but “Let’s double crux that!”, and then actually looking at the evidence and finding an answer or agreeing that the matter isn’t clear. A community where it was considered okay and normal and obvious to say that life sometimes involves very difficult tradeoffs. A community where it was considered virtuous to talk and think as clearly and level-headedly as possible about these difficult tradeoffs.
And in this community, I found mental frameworks that helped me understand what went wrong in my socialist bubble: Most memorably, Yudkowsky’s Politics is the Mind-Killer and his Death Spirals sequence. I’d place a bet that the majority of the people who are concerned about this commitment know their content, and that the majority of the people who support it don’t. And I think it would be good if all of us were to (re-)read them amidst this drama.
I’m a big fan of being considerate of each others’ feelings and needs (though I’m not always good at that). I’m a big fan of not being a bigot (though I’m not always good at that). Overall, I’d like EA to feel way more like the warm, familiar, supportive anti-discriminatory safe spaces of my early twenties.
Unfortunately, I don’t think this pledge makes much of a difference there.
At the same time, after I saw the destructive virtue signalling of my early 20s play out as it did, I do fear that this pledge and similar contributions to the current debate might make all the difference for breaking EA’s discourse norms.
And by “breaking EA’s discourse norms”, I mean moving them way closer to the conformity pressure and groupthink I left behind.
If we start throwing around loaded and vague buzzwords like “(anti-)sexism” and “(anti-)racism” instead of tabooing our words and talking about concrete problems, how we feel about them, and what we think needs doing in order to fix them, we might end up at the point where parts of the left seem to be right now: Ostracizing people not only when that is necessary to protect other community members from harm, but also when we merely talk past each other and are too tired from infighting to explain ourselves and try and empathize with one another.
I’d be sad about that. Because then I’d have to look for a new community all over again.
I’ve attended an online LessWrong Community Weekend co-organized by Linda and can vouch for her capability to organize unconferences way beyond the level of what I thought possible.
Agreed, “knowledge capital” fits well.
And though I sometimes sound a whole lot like a Slytherin, I absolutely don’t want to normalize using the Dark Arts in EA community building. I’ll change the term in the initial post and link to this comment thread.
Do you have a link to a concise definition of “ideational capital”? I googled your citation and found a book, but apparently my skill in deciphering political science essays has massively declined since university.
A meta-level remark: I notice I’m a bit emotionally attached to “memetic capital”, because I’ve thought about these things under the term “memetics” a bunch during the last year. In addition, a person whose understanding of cultural evolution I admire tends to speak about it in terms of memetics, so there are some matters of tribal belonging at play for me. Just flagging this, because it makes me prone to rationalization and to overestimating the strength of my reasons to defend the term “memetic capital”.
_____________
Now to why I genuinely think “memetic capital” is more fitting:
1. It is useful not only for talking about propositional knowledge.
When I read “ideational capital”, or generally “ideas”, I initially think exclusively of propositional knowledge, i.e. things that can be stated as facts in natural language, like “Tel Aviv is a city in Israel.” But there are other forms of knowledge than propositional knowledge. John Vervaeke, for example, describes the “4 Ps of knowledge”:
- Propositional knowing (see above)
- Procedural knowing (knowledge how to do things, e.g. ride a bicycle, or fill a tax form)
- Perspectival knowing (knowledge of where and how you are situated in the world as an embodied being, e.g. where up and down is, that this is a computer and that a glass door I can’t just pass through without opening it.)
- Participatory knowing (knowledge of how to move in the world. E.g., whether you feel stuck and confused staring at a bouldering problem, or just get into flow while your hands and feet find the right places on the wall almost on their own.)
In my understanding, memetics is useful for describing all four forms of knowing, while at first glance, “ideational capital” only refers to the propositional kind. And in my opinion, the more interesting aspects of memetic capital are procedural, perspectival, and participatory knowing. It’s better than nothing to have propositional knowledge that Double Crux exists and what the key steps listed in the CFAR handbook are. But the more interesting, and more important, thing is having an intuitive grasp of the spirit of the method, and intuitively, without thinking, applying it in a conversation like this one.
2. Memetics lends itself to a systemic, rather than an engineering, approach to understanding and influencing social systems.
The observation that memes’ evolutionary fitness is orthogonal to their usefulness points out a problem, but it also helps us get a better grasp of which strategies for spreading valuable memes might and might not work. For example, if we ask “Which core concepts should more people in EA know?”, we end up writing a curriculum, like the post above. However, we can also ask “What trajectory does EA’s cultural evolution have, and how can we influence that trajectory so that it flows in a more desirable direction?” Then we might discover more, and completely different, angles of attack. For example, we might end up with a call to action and an offer to connect group organizers to facilitators, like the one at the end of my post.
I wrote the following on Facebook in a precursor to this EAF post:

In groups with a healthy debate culture, the process by which group beliefs around a polarizing topic shift looks a lot less like people convincing each other with arguments and a lot more like osmosis. Because the main thing that has to be addressed is not peoples’ consciously held beliefs, but the underlying values, aspects of personal identity and belonging, and peoples’ emotions about them.
What (I think) happens is that at first, the other side sounds like outrageous out-group nonsense. Then you hear more and more people you *like* repeating the out-group’s opinions. Each time, you empathize a tiny bit more with that opinion, and over time, it feels less and less like evil out-group nonsense and more and more like “that’s coherent, though not the way I’d go about things.” And gradually, people converge towards a new stable equilibrium of how to do stuff that accounts for all of the formerly polarized interests.
As it is based on evolutionary theory, I think memetics is particularly well-suited for describing and understanding processes of cultural evolution like the one outlined above, which I think is currently happening in EA. And the better we understand these processes, the better we can intervene on them to prevent bad things. These bad things could be an EA-internal culture war, or even the community eventually breaking apart because it can’t handle its own diversity.
When I say “memetic capital”, I don’t have a specific set of timeless ideas in mind that all EAs should know/should have known about all along. Instead, I think of an ever-changing egregore of ideas, processes, traditions, social customs, social technologies and psychotechnologies, figures of speech, and what have you. And the reference to evolutionary theory that the term “memetic” implies feels very elegant to me for pointing at this egregore.
The crux of EA’s (or any social bubble’s) memetic capital is that it is hard to inventory, hard to steer, and that in some sense, we are its servants rather than its masters. I think memetics can describe that. And in the process of describing, it can help us gain more agency over which memes we want to keep and which ones we want to get rid of.
_____________
Do let me know whether this makes sense or sounds like total gibberish to you. I’m thinking/explaining all these things for the first time. And the way I think about social systems is influenced way more by continental philosophy, psychoanalysis, systems theory, and Buddhism than by the more STEM/engineering-style approaches that are common in EA. Thus, I have no clue whether I’m at all understandable to people who have read different books than I have.
Yup, the evolution and spreading of memes is not always aligned with the good, true, and beautiful. But the same is true for genetics, the field which memetics is originally based on.
Human evolution shaped our gene pool in a way that makes us prone to some biases, some forms of prejudice, and violence. But that is just one of the many aspects of these theories. That these are facts about genetics doesn’t mean that genetics and evolution themselves are evil or tainted and something you shouldn’t associate yourself with unless you are a hateful bigot. If we were to follow your rule to the end, we would have to reinvent all the vocabulary of genetic research from scratch because racists sometimes like to talk about genetics as well. Memetics is not a trivial field of knowledge, and I think that, just as with genetics, obfuscating the valuable work that has already been done there by reinventing the wheel with a fresh branding is way too costly.
While we’re at Wikipedia-ing, mind the introductory definition in the meme article:

A meme (/miːm/ MEEM) is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme. A meme acts as a unit for carrying cultural ideas, symbols, or practices, that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme.
That description is entirely independent of how valuable or damaging memes are, and pretty much exactly what I mean. “The memetic capital of the EA community”, then, is the stock of good and useful memes we have readily available in the community, along with the absence of bad and harmful memes.
I disagree.
That makes sense from first principles, but collides with convention in a way I’d rather not risk.
Since Bourdieu, “cultural capital” has been a pretty loaded term in sociology. My off-the-cuff definition would be something along these lines: “The competences and resources you have available for successfully signalling that you fit in with the upper strata of society.”
Often enough, signalling cultural capital in the sociological sense and spreading memetic capital in my sense are outright incompatible goals. For example, for spreading valuable memetic capital, it might make sense to diligently follow Rule 0, and to say “Let’s Yes/No-debate that!” any time a political issue arises. If you do that at a tea party hosted by the British royal family, you will probably not be invited again.
On another note, memetics is a field of knowledge that I think community builders should know way more about. It’s just remarkably useful for developing intuitions around PR, infohazards, which programs to run and why, etc. Part of my intention behind choosing that handle is making memetics in general a bigger thing in EA. And just using the word “memetic” very often seems to me like a less-than-terrible way to sneak more knowledge about memetics into EA’s memetic capital.
I have 15+ hours experience in running Active Hope workshops, and 50+ as a participant. Happy to chat if anyone wants to dive in more deeply.
Specifically, I’ve done a bunch of thinking on how to adapt the deep ecology-based models of Active Hope workshops to an orthodox AI safety audience. For more info on the general framework, further resources, and a list of 7 self-reflection prompts to try it out, feel free to take a look at this writeup I made for the attendees of a workshop at LessWrong Community Weekend 2022.
Thank you. Knowing you are part of this movement makes me want to stay involved because of, not despite, the EAs.
For anyone interested in doing this: Ollie Base and Linda Linsefors have already expressed interest in supporting it on LessWrong.
Unfortunately not; the event depends a lot on in-person interaction. The CFAR handbook is available online though, and there are at least two LessWrong sequences which try to teach the CFAR skills in a more interactive and practice-oriented manner than the handbook: Hammertime, Training Regime.
Note that we changed the location to teamwork!
The Berlin Hub post-mortem
In April 2022, we announced our plan for The Berlin Hub, a longtermist/AI safety co-living and event space in Berlin. Aurea moved faster than us and has already opened the doors of another longtermist group house in Berlin. Though their theory of change is a bit different from ours, we think the expected counterfactual value of following through on the Hub decreased so strongly with their launch that it makes sense to halt the project for now and wait to see what grows out of Aurea before starting other group houses in Berlin or elsewhere.
In addition, after the crypto crash earlier this year, our funding would only have sufficed for the less ambitious version of this project, with about 8-10 bedrooms. That is one of the reasons why we delayed the launch in summer and planned to apply for more funding now. Given the FTX crash, we no longer think it would be an effective use of community resources to fund more ambitious projects rather than to try and sustain what already exists.
While the Hub didn’t succeed, we did learn a wealth of lessons about trying ambitious projects along the way. Here is some advice we’d give our past selves:
1. Stay lean.
Don’t waste time and money on more organizational overhead than is absolutely necessary.
Aurea didn’t strictly move faster than we did: The seed of the project already existed as a private flat when we started working on the Hub. They moved in together, saw how that went, and decided to expand from there. We, however, started out by thinking about how to encourage a healthy house culture, how to mitigate downside risks, how to best set up an application process, which organizational form would be best for such a project, which skills I’d still need to build to fulfill the community manager role, et cetera, et cetera. While EA tends to incentivize big-picture thinking, for new and ambitious projects with limited precedent like this, it appears to be more useful to put one foot in front of the other, only then decide where to go next, and only build additional infrastructure if it seems unavoidable.
Instead, build lots of minimum viable products (MVPs), however bulletproof your Grand Plan looks.
People were impressed with the clarity and detail of our reasoning in the Hub announcement. In addition, because the reference class of co-living projects with the stated purpose of saving the world doesn’t look great, we created several pages of unpublished (and not yet comprehensive) writing on how to mitigate downside risks for such a project, how to create a healthy community, and other topics. We do think this upfront research was sensible. However, in hindsight, doing more small-scale trial-and-error alongside it would have gone a long way: starting with longer and longer incubator-style retreats as the first MVP, finding a core cohort to found a not-yet-fully-public shared flat with, and iterating and growing from there.
If the startup jargon of MVPs is new to you, here is a remarkably concise and informative writeup by Henrik Kniberg.
Start co-living spaces with a core cohort.
Our latest community plan for Ithaka involved building around a dedicated core cohort of 5-10 residents who sustain a consistent culture and reduce the memetic loss caused by too much flowthrough. With that core group in place, we’d have wanted to invite short-term guests over to bring in new ideas, learn from the longer-term residents, and build their networks. Collecting that core cohort through retreats and networking *before* going on the lookout for housing and funding would have made the whole project significantly easier. Here are some reasons:
With a group of people, we could have directly rented a flat together to start an informal group house, instead of going through the paperwork associated with setting up a charity in Germany.
This would have made us significantly less dependent on external funding, and more resilient to the crypto crash earlier this year that hit one of our seed funders hard.
When starting community spaces, work with the local community right from the start.
This is one of the things that are completely obvious in hindsight, but weren’t at the start. Work on the hub started almost a year ago at CEEALAR, a similar space in England. This made sense to me at the time: It gave me first-hand experience with how life in such a community might be, how the group dynamics work, and which routines and rituals may help make it go well.
This gave me the opportunity to learn a lot of non-obvious useful things about community dynamics. For example, in co-living spaces with continuous flowthrough, peoples’ social energy for relating to others and organizing events for the community seems to be highest at the start, and diminishes over time as the sparkle of novelty fades, people develop their own routines, and they become less open to forming ever-new bonds with newcomers every other week. This is why we planned to have short-term visitors arrive in fixed three-week cohorts at the start of each month. While the long-term guests could have provided memetic and cultural stability, the short-term guests would have had the opportunity to share the sparkle of novelty with one another. And not only the sparkle of novelty: They could have settled into a more work-minded mode synchronously, until all of them leave before the end of the month and the core cohort has a week to fully focus on each other and their own projects. A pulsing motion of expansion and contraction.
At the same time, starting the ideation phase at CEEALAR meant that I couldn’t do any on-the-ground networking in Berlin before the start of the summer, when we moved over to start looking for houses and working on the charity paperwork with a local lawyer. This was a mistake: An outward-facing project this size can hardly be carried by two people. It needs a whole community to grow into and out of. It needs an understanding of the local needs and priorities, the existing community building endeavors and bottlenecks, etc.

While we planned to build close ties to the local community after opening doors, it would have made sense to treat Berlin’s lively local EA community as crucially important stakeholders right from the start. Concretely, that would have meant building connections to as many Berlin-based community builders and longtermist-adjacent EAs as possible. This could have left us with a decent core cohort already. In addition, a gears-level understanding of how the Berlin community works would have been useful evidence while thinking about the community plan for the house. At the early stages of the project, a solid local network could have come in handy during the location search: Many good options on the competitive Berlin housing market never make it into public announcements; relationships are everything. That may not be true for the most ambitious versions of this project, but it definitely is for an Aurea-style decentralized group house spread out over several flats.
Many people who filled out expressions of interest were thrilled to help make the space happen. But with most tasks being rapidly changing desk work that seemed to require a complete overview of the project, I didn’t find ways to properly leverage that community energy through delegation, partly because most of the interested people were not locals.
This is another thing Aurea seem to have done right: Their opening party was already a lively mix of local EAs, entrepreneurs, and neighbors. They built the community ties first, and the group house itself afterward.
A useful article in this context: Start with “who”.
2. Make extra space for building your co-founder relationship.
Founding a startup is like getting married: You have to talk about your shared vision for the future, figure out how to communicate well with one another, talk about commitments and responsibilities, finances, et cetera. While we as the Hub’s founding team had great chemistry at the start, we found that our styles of working and communicating could have been a better match. It would have made sense for us to buckle down and figure all of this out right away instead of over the course of our work together. You may want to have a thorough conversation right at the start of the project about how each of you works best. Identify your individual strengths, weaknesses, and quirks, as well as potential synergies and points of conflict when they come together. We did this alongside starting out on the object-level work. Next time I start such an ambitious project with someone, I might want to go all-in on teambuilding and lock us away in a cabin in the woods for at least a weekend, better yet a week.
First Round’s 50 questions to Explore with a Potential Co-Founder may be a good start for this. For bonus bulletproofness, finish off by applying CFAR’s murphyjitsu to your co-founder relationship.
Some potentially useful background models:
Google’s Project Aristotle study on predictors of team performance. They found that the biggest predictor of team performance is whether or not the team members have a feeling of psychological safety, i.e. “an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive.” As I consider this an undervalued key component of EA community building, I’ll write more EAF posts on it in the coming months.
Bruce Tuckman’s four stages of group development. In 1965, Bruce Tuckman proposed that groups tend to go through four consecutive stages: Forming, storming, norming, and performing. The bottom line of this model, in one sentence: Instead of dreading and avoiding it, invite team conflict as an opportunity for growth, for developing psychological safety, and for finding effective ways for working together.
3. Funding: Don’t be over-invested in crypto.
This one is the hardest to give advice on. While it’s a fact that the crypto crash hit one of our seed funders and significantly reduced our start capital, it would have been hard for us to do anything differently. Withdraw the crypto funding ASAP? That was not feasible without having the charity paperwork in order. Apply for more funding from different sources early in the year to be maximally safe? Definitely.
What next?
Our next step is to wait and see how Aurea and the Berlin EA ecosystem in general evolve. Maybe it will make sense to pick this project up again a few years from now, maybe not.
I’d like to have an inventory of EA inventories like this.
Here is the seed. Does anyone want to take ownership of maintaining it?
Don’t ask what EA can do for you, ask what you can do for EA.
An obvious-in-hindsight statement I recently heard from a friend:
While this makes complete sense in theory, it is emotionally difficult to commit to it if most of your friends are in EA. This makes it hard for us to evaluate our impact on the community properly. Motivated reasoning is a thing.
So, it may be worthwhile for us to occasionally reflect on the following questions:
If I were to look back in ten years and find that my presence was, in hindsight, bad for EA: what would the reasons be?
Who could I ask for an honest evaluation of which bits of my behavior serve the cause, and which harm it?
If I were to decide that my presence harms the community: how would I get my social needs met anyway?