“We established a policy that established members, especially members of the executive, were to refrain from hitting on or sleeping with people in their first year at the society.”
This sounds super reasonable for EA, too. How would you enforce/communicate this?
Severin
Suggestion: A workable romantic non-escalation policy for EA community builders
The Berlin Hub: Longtermist co-living space (plan)
EA is going through a bunch of conflict. Here are some social technologies that may help.
Community building: Lessons from ten years of facilitation experience
Dissatisfied with the state of EA? You can give me a call.
AGI Safety Needs People With All Skillsets!
Thanks for writing this up. I agree with most of these points. However, not with the last one:
I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well.
If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:
Funded community builders would possibly have even stronger incentives to prioritize community growth over sustainable planning, accountability infrastructure, and community health. To my knowledge, CEA’s past funding policy incentivized community builders to Goodhart on acquiring new talent and funds, at the cost of building sustainable network and structural capital, and at the cost of fostering constructive community norms and practices. As long as one avoided visibly damaging the EA brand or turning the very most talented individuals off, it was simply financially unreasonable to pay much attention to these things.
In other words, the financial incentives so far may have forced community builders into becoming the hard-core utilitarians you are concerned about. And accordingly, they were forced to be role models of hard-core utilitarianism for those they built community for. This may have contributed to EA orthodoxy pre-FTX collapse, when hard-core utilitarianism seemed to be generally treated as synonymous with value-alignment and high status.
I don’t expect this problem to get better if the bar for getting or remaining funded as a community builder gets higher, unless the metrics change significantly.

Access to informal networks would become even more crucial than it already is. If we take money out of community building, we apply optimization pressure away from welcomingness and low entry barriers to the community. Even more of EA’s onboarding and mentorship than is already the case will be tied to informal networks. Junior community members will experience even stronger pressure to get invited to the right parties, impress the right people, and become friends and lovers with those who have money and power.
Accordingly, I suspect that the actual answer here is more professionalization, and in a different direction. Specifically:
Turning EA community building from a career stepping stone into a long-term career, with proper training, financial security, and everything. (CEA has already thought of this, of course; I can’t find the relevant post.)
Having more (and more professionalized) community health infrastructure in national and local groups. For example, point people whom community members actually know and can talk to in person.
CEA’s community health team is important, and for all I know, they are doing a fairly impressive job. But I think the bar for reaching out to community health people could be much lower than it currently is. For many community members, CEA’s team are just strangers on the internet, and I suspect that all too many new community members (i.e. those most vulnerable to power abuse, harassment, and peer pressure) haven’t heard of them in the first place.

Creating stronger accountability structures in national and local groups, like a board of directors that oversees larger local groups’ work without being directly involved in it. (For example, EA Munich recently switched to a board structure, and we are working on that in Berlin ourselves.)
For this to happen, we would need more experienced and committed people in community building. While a board of directors can technically be staffed entirely by volunteers, withdrawing funding and prestige from EA community building will make it harder to get the necessary number of sufficiently experienced and committed people on board.
Thoughts, disagreement?
(Disclaimer on conflict of interest: I’m currently EA Berlin’s Community Coordinator and fundraising to turn that into a paid role.)
AGI safety field building projects I’d like to see
The Berlin Hub post-mortem
In April 2022, we announced our plan for The Berlin Hub, a longtermist/AI safety co-living and event space in Berlin. Aurea moved faster than us and has already opened the doors of another longtermist group house in Berlin. Though their theory of change is a bit different from ours, we think the expected counterfactual value of following through on the hub decreased so strongly with their launch that it makes sense to halt the project for now and wait for what might grow out of Aurea before starting other group houses in Berlin or elsewhere.
In addition, after the crypto crash earlier this year, our funding would only have sufficed for the less ambitious version of this project, with about 8-10 bedrooms. That is one of the reasons why we delayed the launch in summer and planned to apply for more funding now. Given the FTX crash, we no longer think it would be an effective use of community resources to fund more ambitious projects rather than try to sustain what already exists.
While the Hub didn’t succeed, we did learn a wealth of lessons about trying ambitious projects along the way. Here is some advice we’d give our past selves:
1. Stay lean.
Don’t waste time and money on more organizational overhead than is absolutely necessary.
Aurea didn’t strictly move faster than we did: The seed of their project already existed as a private flat when we started working on the hub. They moved in together, saw how that went, and decided to expand from there. We, however, started out by thinking about how to encourage a healthy house culture, how to mitigate downside risks, how to best set up an application process, which organizational form would be best for such a project, which skills I’d still need to build to fill the community manager role, et cetera. While EA tends to incentivize big-picture thinking, for new and ambitious projects with little precedent like this, it appears more useful to put one foot in front of the other, only then decide where to go next, and only build additional infrastructure if it seems unavoidable.
Instead, build lots of minimum viable products (MVPs), however bulletproof your Grand Plan looks.
People were impressed with the clarity and detail of our reasoning in the Hub announcement. In addition, because the reference class of co-living projects with the stated purpose of saving the world doesn’t look great, we created several pages of unpublished (and not yet comprehensive) writing on how to mitigate downside risks for such a project, how to create a healthy community, and other topics. We do think this upfront research was sensible. In hindsight, however, doing more small-scale trial and error alongside it would have gone a long way: starting with longer and longer incubator-style retreats as the first MVP, finding a core cohort to found a not-yet-fully-public shared flat with, and iterating and growing from there.
If the startup jargon of MVPs is new to you, here is a remarkably concise and informative writeup by Henrik Kniberg.
Start co-living spaces with a core cohort.
Our latest community plan for Ithaka involved building around a dedicated core cohort of 5-10 residents who sustain a consistent culture and reduce the memetic loss that comes with too much flowthrough. With that core group in place, we’d have wanted to invite short-term guests over to bring in new ideas, learn from the longer-term residents, and build their network. Collecting that core cohort through retreats and networking *before* going on the lookout for housing and funding would have made the whole project significantly easier. Here are some reasons:
With a group of people, we could have directly rented a flat together to start an informal group house, instead of going through the paperwork associated with setting up a charity in Germany.
This would have made us significantly less dependent on external funding, and more resilient to the crypto crash earlier this year that hit one of our seed funders hard.
When starting community spaces, work with the local community right from the start.
This is one of those things that are completely obvious in hindsight, but weren’t at the start. Work on the hub started almost a year ago at CEEALAR, a similar space in England. This made sense to me at the time: It gave me first-hand experience of what life in such a community might be like, how the group dynamics work, and which routines and rituals may help make it go well.
This gave me the opportunity to learn a lot of non-obvious useful things about community dynamics. For example, in co-living spaces with continuous flowthrough, people’s social energy for relating to others and organizing events for the community seems to be highest at the start, and diminishes over time as the sparkle of novelty fades, people develop their own routines, and they become less open to forming ever-new bonds with newcomers every other week. This is why we planned to have short-term visitors arrive in fixed three-week cohorts at the start of each month. While the long-term residents could have provided memetic and cultural stability, the short-term guests would have had the opportunity to share the sparkle of novelty with one another. And not only the sparkle of novelty: They could have settled into a more work-minded mode synchronously, until all of them leave before the end of the month and the core cohort has a week to fully focus on each other and their own projects. A pulsing motion of expansion and contraction.
At the same time, starting the ideation phase at CEEALAR meant that I couldn’t do any on-the-ground networking in Berlin before the start of the summer, when we moved over to start looking for houses and working on the charity paperwork with a local lawyer. This was a mistake: An outward-facing project of this size can hardly be carried by two people. It needs a whole community to grow into and out of. It needs an understanding of the local needs and priorities, the existing community building endeavors and bottlenecks, and so on. While we planned to build close ties to the local community after opening our doors, it would have made sense to treat Berlin’s lively local EA community as crucially important stakeholders right from the start. Concretely, it would have made sense to build connections to as many Berlin-based community builders and longtermist-adjacent EAs as possible from the beginning. This alone could have left us with a decent core cohort. In addition, a gears-level understanding of how the Berlin community works would have been useful evidence while thinking about the community plan for the house. At the early stages of the project, a solid local network could also have come in handy during the location search: Many good options on the competitive Berlin housing market never make it into public listings; relationships are everything. That may not be true for the most ambitious versions of this project, but it definitely is for an Aurea-style decentralized group house spread out over several flats.
Many people who filled out expressions of interest were thrilled to help make the space happen, but with most tasks being rapidly changing desk work that seemed to require a complete overview of the project, I didn’t find ways to properly leverage that community energy through delegation. Not least because most of the interested people were not locals.
This is another thing Aurea seem to have done right: Their opening party was already a lively mix of local EAs, entrepreneurs, and neighbors. They built the community ties first, and the group house itself afterward.
A useful article in this context: Start with “who”.
2. Make extra space for building your co-founder relationship.
Founding a startup is like getting married: You have to talk about your shared vision for the future, figure out how to communicate well with one another, and talk about commitments, responsibilities, finances, et cetera. While we as the Hub’s founding team had great chemistry at the start, we found that our styles of working and communicating were not a perfect match. It would have made sense for us to buckle down and figure all of this out right away instead of over the course of our work together. You may want to have a thorough conversation right at the start of the project about how each of you works best. Identify your individual strengths, weaknesses, and quirks, as well as potential synergies and points of conflict when they come together. We did this alongside starting the object-level work. Next time I start such an ambitious project with someone, I might want to go all-in on team building and lock us away in a cabin in the woods for at least a weekend, better a week.
First Round’s 50 questions to Explore with a Potential Co-Founder may be a good start for this. For bonus bulletproofness, finish off by applying CFAR’s murphyjitsu to your co-founder relationship.
Some potentially useful background models:
Google’s Project Aristotle study on predictors of team performance. They found that the biggest predictor of team performance is whether or not the team members have a feeling of psychological safety, i.e. “an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive.” As I consider this an undervalued key component of EA community building, I’ll write more EAF posts on it in the coming months.
Bruce Tuckman’s four stages of group development. In 1965, Bruce Tuckman proposed that groups tend to go through four consecutive stages: forming, storming, norming, and performing. The bottom line of this model, in one sentence: Instead of dreading and avoiding it, invite team conflict as an opportunity for growth, for developing psychological safety, and for finding effective ways of working together.
3. Funding: Don’t be over-invested in crypto.
This one is the hardest to give advice on. While it’s a fact that the crypto crash hit one of our seed funders and significantly reduced our starting capital, it would have been hard for us to do anything differently. Withdraw the crypto funding ASAP? That was not feasible without having the charity paperwork in order. Apply for funding from more diverse sources early in the year to be maximally safe? Definitely.
What next?
Our next step is to wait and see how Aurea and the Berlin EA ecosystem in general evolve. Maybe it will make sense to pick this project up again a few years from now, maybe not.
The Conversations We Make Space For
This shifted my opinion towards being agnostic/mildly positive about this public statement.
I’m still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms, for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yes, this amount of public support may well nudge grantmakers and donors to invest more into community health. If so, I’m definitely in favor of that.
Don’t ask what EA can do for you, ask what you can do for EA.
An obvious-in-hindsight statement I recently heard from a friend:
“If I believed that me being around was net negative for EA, I’d leave the community.”
While this makes complete sense in theory, it is emotionally difficult to commit to it if most of your friends are in EA. This makes it hard for us to evaluate our impact on the community properly. Motivated reasoning is a thing.
So, it may be worthwhile for us to occasionally reflect on the following questions:
If I were to look back in ten years and find that my presence was, in hindsight, bad for EA: What would the reasons be?
Who could I ask for an honest evaluation of which bits of my behavior serve the cause, and which harm it?
If I were to decide that my presence harms the community: How would I get my social needs met anyway?
Thanks for the question!
Here’s a chunk of the background models that informed our decision.

I see three main potential benefits that can come from impact-focused co-living projects like these:
1) Reduced living costs
2) Centralizing everyday chores like cooking, cleaning, and restocking to keep people’s backs free for work
3) Fostering synergies and cross-pollination between residents’ projects

CEEALAR (formerly “EA Hotel”) leverages 1) to the max by pushing living costs as low as £6,500/year/person (from memory; I might be off). At the same time, all the restocking and a significant chunk of the cooking and cleaning is taken care of, so that people have their backs maximally free for EA work. Meanwhile, CEEALAR doesn’t have a specific cause area focus and doesn’t invest many resources specifically into enabling mentorship for residents and facilitating collaborations. These things are encouraged and do happen, but they are not a key priority. In the three months I have been there so far, the default has been people working on their projects side by side and only occasionally exchanging feedback and plotting shared endeavors over dinner.
As Berlin is significantly more expensive than Blackpool, we won’t be able to leverage reduced living costs as well as CEEALAR can. At the same time, we are making plans to maximize synergies between residents’ projects. If things go according to my current dreams, The Berlin Hub might turn into an incubator for longtermist research groups and startups within the next few years. A bit of diversity is useful for preventing groupthink, but with insufficient overlap between people’s subcultures and interests, it would make little sense for people to even try to collaborate. The filter we are putting in place is meant to ensure that professional exchange and cooperation between residents is possible with relatively low effort.
We explicitly don’t want to only hang out with longtermists, but are trying to find a good balance. For example, we plan to run events at the hub that are open to the (EA) public and have no specific cause area focus, to make sure we don’t just stew in our own juices. We’ll also encourage residents to mingle with the local EA and non-EA community. After all, that is one of the reasons we picked Berlin in the first place.
In addition to my personal cause prioritization, I’m doing this because I’m excited about the idea of impact-focused co-living projects in general. I’d be delighted if we manage to deliver a proof of concept that goes beyond what CEEALAR already did and inspire others to try similar things. In fact, I’m already in contact with people from several countries across the globe who have plans for founding EA co-living projects. I’m happy to share my models and network with anyone who wants to do that as well, independent of their cause area focus and specific theory of change.
I only have limited time and would rather do one thing well than ten things badly. In this case, following my personal cause prioritization and my understanding of the longtermist community’s bottlenecks, the one thing I’m trying to do well is to start a longtermist research group incubator for the Schengen area. Somebody has to run the pizza booth. If my comparative advantage and what excites me most is baking pizza, I believe it would be unwise of me not to focus on making the best pizza in town, and instead offer mediocre pizza so that I can sell veggie burgers, curry, tacos, hot dogs, pasta, ice cream, and haircuts on the side.
Is that response satisfying? Do let me know if not.
A hedge I’d add: ”...unless these people know each other from outside the boardgame club”.
Kick-off meeting: Doing Things Better—A course in the art of applied rationality
I have 15+ hours of experience running Active Hope workshops, and 50+ as a participant. Happy to chat if anyone wants to dive in more deeply.
Specifically, I’ve done a bunch of thinking on how to adapt the deep ecology-based models of Active Hope workshops to an orthodox AI safety audience. For more info on the general framework, further resources, and a list of 7 self-reflection prompts to try it out, feel free to take a look at this writeup I made for the attendees of a workshop at LessWrong Community Weekend 2022.
Okay, these strong down- and disagreement-votes are genuinely mysterious to me now.
The only interpretation that comes to mind is that somebody expects something bad could come from this offer. I can’t imagine anything bad coming from it, so I’d appreciate feedback. Either here, where I can respond, or in my admonymous is fine.
Thanks, I removed my downvote after reading this comment.
Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I no longer think this is a very appropriate place for a slippery-slope argument.
_____________
I’d like to explain why I won’t sign this document, because a voice like mine still seems to be missing from the debate: someone who is worried about this pledge, yet was thoroughly involved in leftist discourse for several years pre-EA.
So here you go for my TED talk.
I’m not a Sam in a bunch of ways: I come from a working-class background. I studied continental philosophy and classical Greek at a little-known small-town university in Germany (and was ashamed of that for at least my first two years of involvement with EA). Though I was thunderstruck by the simple elegance of utilitarian reasoning as a grad student, I never really developed a mind for numbers and never made reading academic papers my guilty pleasure. I spent long enough with the libertarian socialists before getting into EA that I’m still way better at explaining Hegel, Marx, Freud, the Frankfurt School, the battle lines between materialist and queer feminism, or how to dive a dumpster than even basic concepts of economics. In short: As far as knowing the anti-racist and anti-sexist discourse is concerned, I may well be in the 95th percentile of the EA community.
And because of all of this life experience, reading this statement sent a cold shower down my spine. Here’s why.
I have been going by female pronouns for a couple of years. That’s not a fortunate position to be in in a small German university city whose cultural discourse is always 10-20 years behind any Western capital, especially those of the Anglo-Saxon world. I’ve grown to love the feeling of comfort, familiarity, and safety that anti-discriminatory safe spaces provide, and I’ve actively taken part in making these spaces safe, sometimes in a more constructive tone, sometimes in a less constructive one.
But while enjoying that safety, comfort, and sense of community, I constantly lived with a nagging, half-conscious fear of getting ostracized myself one day for accidentally calling the wrong piece of group consensus into question. At the same time, I was never quite sure what the group consensus actually was, because I’m not always great at reading rooms, and because just asking all the dumb questions felt like too big a risk to my standing in the tribe. Humility has not always been a strength of mine, and I haven’t always valued epistemic integrity over having friends.
The moment when the extent of this clusterfuck of groupthink dawned on me came after we went to the movies for a friend’s birthday party: Iron Sky 2 was on the menu. After leaving the cinema, my friend told me that during the film, she had occasionally glanced over at me to gauge whether it was “okay” to laugh about, well, Hitler riding on a T-Rex. She glanced over at me to gauge what was acceptable. She, who was so radically Leninist that I never dared mention that I’m not actually all that fond of Lenin. Because she had plenty of other wonderful qualities besides being a Leninist, and had I risked getting kicked out of the tribe over a petty who’s-your-favorite-philosopher debate, that would have been very sad.
On that day, I realized that both of us had lived with the same fear all along. And that all our radical radicalism was at least two thirds really, really stupid virtue signalling. Wiser versions of us would have cut the bullshit and said: “I really like you and I don’t want to lose you.” But we didn’t, because we were too busy virtue signalling at each other that really, you can trust me and don’t have to ostracize me, I’m totally one of the Good Guys(TM).
Later, I found the intersection between EAs and rationalists: A community that valued keeping your identity small. A community where the default response to a crass disagreement was not moral outrage or carefully reading the room to grasp the group consensus, but “Let’s double crux that!”, and then actually looking at the evidence and finding an answer or agreeing that the matter isn’t clear. A community where it was considered okay and normal and obvious to say that life sometimes involves very difficult tradeoffs. A community where it was considered virtuous to talk and think as clearly and level-headedly as possible about these difficult tradeoffs.
And in this community, I found mental frameworks that helped me understand what went wrong in my socialist bubble: Most memorably, Yudkowsky’s Politics is the Mind-Killer and his Death Spirals sequence. I’d place a bet that the majority of the people who are concerned about this commitment know their content, and that the majority of the people who support it don’t. And I think it would be good if all of us were to (re-)read them amidst this drama.
I’m a big fan of being considerate of each others’ feelings and needs (though I’m not always good at that). I’m a big fan of not being a bigot (though I’m not always good at that). Overall, I’d like EA to feel way more like the warm, familiar, supportive anti-discriminatory safe spaces of my early twenties.
Unfortunately, I don’t think this pledge makes much of a difference there.
At the same time, after I saw the destructive virtue signalling of my early 20s play out as it did, I do fear that this pledge and similar contributions to the current debate might make all the difference for breaking EA’s discourse norms.
And by “breaking EA’s discourse norms”, I mean moving them way closer to the conformity pressure and groupthink I left behind.
If we start throwing around loaded and vague buzzwords like “(anti-)sexism” and “(anti-)racism” instead of tabooing our words and talking about concrete problems, how we feel about them, and what we think needs doing in order to fix them, we might end up at the point where parts of the left seem to be right now: Ostracizing people not only when that is necessary to protect other community members from harm, but also when we merely talk past each other and are too tired from infighting to explain ourselves and try and empathize with one another.
I’d be sad about that. Because then I’d have to look for a new community all over again.