Strong upvote!
I’m constantly putting some effort into automating information flows.
E.g., I asked an EA Berlin community member to write a how-to on finding housing in Berlin, because I get that question at least once a month.
If you have more ideas for how to automate such things, I’d be excited to read about them.
Severin
No hero worship at all intended, sorry if it came off like that. I agree with you that way too much of that happens in EA. Rockwell’s “On living without idols” is by some distance my favorite piece on the EA Forum, and one of my favorite texts on the entire internet.
I’m one of the ~1% of EAs who have a natural tendency to ask for favors too leniently rather than too cautiously, so I would have appreciated knowing these things earlier. The core target audience of this post is people like me.
However, I do think the things I write here might be useful for people outside this group as well: In my understanding, a significant number of people outside my specific subset of neurodivergence tend to just pick up the meme of “better not waste central people’s time and attention!” without ever putting much explicit thought into why things are generally done that way. So, I wanted to make explicit the practicalities behind that intuition, to demystify it and make reaching out to busy people more actionable.
I may have failed in that; I’m still in the process of learning to tailor my writing to all the different sub-audiences within EA at once. Thanks for pointing out that the intended humor in my exaggerated Christiano example wasn’t apparent.
Thanks! Yep, I’m definitely an outlier in EA regarding how much I don’t care about authority.
I added section 7 a couple of hours after publication to account for feedback on the LessWrong side of this post. I’ve now also added a disclaimer at the start:
“Note: The intended message of this post is not “Don’t reach out to busy people!”, but “Do reach out, and have these things in mind to make it more likely to get a response/if you don’t get one.” ”
Since writing this, I’ve done a bunch more debating and thinking about how to handle romantic attraction in communities I’m actively involved in responsibly. So, here’s the rule I want to commit to from now on:
In any community I’m involved in, I won’t be the one driving (or hinting at) romantic escalation with anyone lower in the institutional hierarchy than me. This applies for 1 month after low-intensity interactions like a 90-minute workshop, and for 3 months after high-intensity interactions like a retreat where I was in a lead facilitator role.
Some specifications:
1. Both formal and informal hierarchies count. For example, attendees of workshops I facilitate pro bono or during unconferences still count as “lower in the hierarchy”.
2. Responding to advances people lower in hierarchies make towards me is fine. (Unless other reasons make that seem unethical.)
3. Escalation can happen only if the other person signals at least an obvious 6 on the Decide10 scale. I.e., a lack of proactiveness counts as a “no”.
4. Galaxy-brained Slytherining à la “I’ll just make friends for now and set things up so that they are more likely to propose to me later on” or “You knoooow, I committed to a certain rule because I’m SUCH an ethical person, so if you were to have interest in me, you’d have to be the one to make the first step *wink wink*” is prohibited.
5. I might adjust this rule over time as evidence accumulates, but only *after* consulting with people I trust in these matters.
6. I think a version of this might help us handle EA’s gender imbalance better: It might be good if heterosexual men in general just accepted/declined advances from women, and didn’t proactively flirt themselves.
I agree with that statement, and I didn’t intend to make either of those claims.
I think a more steelmanned version of my initial claim would be that there’s a particular type of struggling that corresponds to low-integrity behavior, and that some aspects of current EA culture make it more likely for people to struggle in that particular way. Even (and maybe especially) if they are generally caring and well-meaning and honestly dedicated to the cause.
I think “scarcity mindset” is an okay handle.
A postrationalist friend also pointed out that what I’m talking about corresponds to Buddhism’s realm of hungry ghosts. In modern psychological reinterpretations of Buddhist mythology, that describes a mode of existence people can get stuck in when they develop the wrong kind of rumination. Basically, always being very aware of lack and what’s missing and being desperate to fill that up.
I’m not sure yet how useful either of these handles will turn out. But then again, this whole post is an intellectual work in progress, and I only reposted it here because people on Facebook found it surprisingly insightful.
Yup, I definitely overgeneralized here and may be completely off. Still, I think there’s something I’m pointing at, and this helps me clarify my thinking. So thanks.
Generally: I by no means want to demonize anyone for struggling. To a significant extent, I buy into a social model of mental health, and mostly see one person’s struggling as a symptom of their whole surrounding (social and other) environment being diseased.
My intention behind this post was to point out some ways in which I think EA is suboptimally organized. The rough claim I was aiming for is this: “It’s easier to be a saint in paradise, so let’s make EA a bit more paradisiacal by fixing some of our norms.”
Yep, I agree with that point—being untrustworthy and underresourced are definitely not the same thing.
I partially agree.
I love that definition of elites, and can definitely see how it corresponds to how money, power, and intellectual leadership in EA revolve around the ancient core orgs like CEA, OpenPhil, and 80k.
However, the sections of Doing EA Better that called for more accountability structures in EA left me a bit frightened. The current ways don’t seem ideal, but I think there are innumerable ways in which formalizing power can make institutions more rather than less molochian, and only a few that actually significantly improve how things are done. Specifically, I see two types of avenues for formalizing power in EA that would essentially make things worse:
Professional(TM) EA might become the outer facade of something that is actually still run by the traditional elite, now even harder to reach and to get into. That’s the concern I already pointed towards in the post above.
The other way things could go wrong would be if we built something akin to modern-day democratic nation states: Giant sluggish egregores of paperwork that reliably produce bad compromises nobody would ever have agreed to from first principles, via a process that is so time-consuming and ensnaring to our tribal instincts that nobody has energy left to have the important truth-seeking debates that could actually solve the problems at hand.
Personally, the types of solutions I’m most excited about are ones that enable thousands of people to coordinate in a decentralized way around the same shared goal without having to vote or debate everything out. I think there are some organizations out there that have solved information flows and resource allocation far more efficiently not only than hierarchical technocratic organizations like traditional corporations, socialist economies, or the central parts of present-day EA, but also than modern democracies.
For example, in regards to collective decisionmaking, I’m pretty excited about some things that happen in new social movements, the organizations that Frederic Laloux described (see above, or directly on https://reinventingorganizationswiki.com/en/cases/), or the Burning Man community.
A decisionmaking process that seems to work in these types of decentralized organizations is the Advice Process. It is akin to how many things are already done in EA, and might deserve to be the explicit ideal we aspire to.
Here’s a short description written by Burning Nest, a UK-based Burning Man-style event:
“The general principle is that anyone should be able to make any decision regarding Burning Nest.
Before a decision is made, you must ask advice from those who will be impacted by that decision, and those who are experts on that subject.
Assuming that you follow this process, and honestly try to listen to the advice of others, that advice is yours to evaluate and the decision yours to make.”
Of course, this ideal gets a bit complicated if astronomical stakes, infohazards, the unilateralist’s curse, and the fact that EA is spread out over a variety of continents and legal entities enter the gameboard.
I don’t have a clear answer yet for how to make EA at large more Advice Process-ey, and maybe what we currently have actually is the best we can get. But, I’m currently bringing the way EA Berlin works closer and closer to this. And as I’ve already learned, this works way better when people trust each other, and when they trust me to trust them. The Advice Process is basically built on top of the types of high-trust networks that can only emerge if people with similar values are also allowed to interact in non-professional ways.
Therefore, if we optimize away from making the personal/professional overlap work, we might rob ourselves of the possibility of implementing mechanisms like the Advice Process, which might help us solve a bunch of our coordination problems but require large high-trust networks to work effectively. Other social movements have innovated on decisionmaking processes before EA. It would just be too sad if we didn’t hold ourselves to a higher standard here than copying the established and outdated management practices of pre-startup-era 20th-century corporations.
“How does your last point fit in there though?”
On second thought, I covered everything that’s immediately relevant to this topic in section 2.2, which I quickly expanded from the Facebook post this is based on. So yeah, 3. should probably be a different EA Forum post entirely. Sorry for my messy reasoning here.
I’ll add more object-level discussion of 3. under Kaj Sotala’s comment.
Thanks for writing this up. I agree with most of these points. However, not with the last one:
“I think we should see ‘EA community building’ as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well.”
If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:
Funded community builders would possibly have even stronger incentives to prioritize community growth over sustainable planning, accountability infrastructure, and community health. To my knowledge, CEA’s past funding policy incentivized community builders to Goodhart on acquiring new talent and funds, at the cost of building sustainable network and structural capital, and at the cost of fostering constructive community norms and practices. As long as one avoided visibly damaging the EA brand or turning off the very most talented individuals, it was just financially unreasonable to pay much attention to these things.
In other words, the financial incentives so far may have forced community builders into becoming the hard-core utilitarians you are concerned about. And accordingly, they were forced to be role models of hard-core utilitarianism for those they built community for. This may have contributed to EA orthodoxy pre-FTX collapse, where it seemed to me that hard-core utilitarianism was generally considered synonymous to value-alignedness/high status.
I don’t expect this problem to get better if the bar for getting/remaining funded as a community builder gets higher—unless the metrics change significantly.

Access to informal networks would become even more crucial than it already is. If we take money out of community building, we apply optimization pressure away from welcomingness/having low entry barriers to the community. Even more of EA’s onboarding and mentorship than is already the case will be tied to informal networks. Junior community members will experience even stronger pressure to get invited to the right parties, impress the right people, and become friends and lovers with those who have money and power.
Accordingly, I suspect that the actual answer here is more professionalization, and into a different direction. Specifically:
Turning EA community building from a career stepping stone into a long-term career, with proper training, financial security, and everything. (CEA has already thought of this, of course; I can’t find the relevant post.)
Having more (and more professionalized) community health infrastructure in national and local groups. For example, point people that community members actually know and can talk to in-person.
CEA’s community health team is important, and for all I know, they are doing a fairly impressive job. But I think the bar for reaching out to community health people could be much lower than it currently is. For many community members, CEA’s team are just strangers on the internet, and I suspect that all too many new community members (i.e. those most vulnerable to power abuse/harassment/peer pressure) haven’t heard of them in the first place.

Creating stronger accountability structures in national and local groups, like a board of directors that oversees larger local groups’ work without being directly involved in it. (For example, EA Munich recently switched to a board structure, and we are working on that in Berlin ourselves.)
For this to happen, we would need more experienced and committed people in community building. While technically, a board of directors can be staffed by volunteers entirely, withdrawing funding and prestige from EA community building will make it more difficult to get the necessary number of sufficiently experienced and committed people enrolled.
Thoughts, disagreement?
(Disclaimer on conflict of interest: I’m currently EA Berlin’s Community Coordinator and fundraising to turn that into a paid role.)
A hedge I’d add: ”...unless these people know each other from outside the boardgame club”.
“We established a policy that established members, especially members of the executive, were to refrain from hitting on or sleeping with people in their first year at the society.”
This sounds super reasonable for EA, too. How would you enforce/communicate this?
Full disclosure, because without it, this post would be a bit phony: I haven’t always followed this policy within EA or outside, and took just one or two weeks from first thinking it might be good to implement it in EA to writing this post.
In general, if I write about community dynamics, assume that I think about them this thoroughly not because I’m extraordinarily virtuous and clear-sighted in regards to people stuff, but because I’m sometimes socially a bit clumsy and all these models and methods help me function at a level that just comes naturally to others. The question guiding my posts on community dynamics is generally something like: “What would I-from-ten-years-ago have needed to know to not make the same mistakes I did?”
Yep, I’m with Xavier here. The rule gives community builders a bit of an incentive not to make EA their only social bubble (which I think is inherently good). And while it is not without workarounds, all of them still cushion the problem it addresses.
For example, it encourages local community builders to hand over event facilitation to others more often. And if the rule is publicly known, participants can take a break from events that one leader leads to get around the rule. If participants don’t know the rule, they’d get informed about its existence when they hit on an organizer. In either case, the consequence of even intentionally working around the rule would be taking it slow.
Yup, “don’t hit on people who don’t hit on me first” is a weaker rule I already decided to adhere to in EA before I started thinking about the one outlined in this post. Independent of power, it just seems utterly necessary for managing the gender imbalance.
Yep, the problem this particular rule tries to fix is that of perceived power imbalance and all the troubles that come with it.
It is an imperfect proxy for sure, but non-proxy rules like “No dating if there is a perceived power imbalance.” are very, very prone to tempt people into motivated reasoning. It can get very hard for humans to evaluate their power imbalance with Alice when oh damn are these freckles cute. False beliefs, from the inside, feel not like beliefs, but like the truth. Because of that, I wouldn’t trust anyone with power who would trust themselves with power.
Note also that while “Bob has power over Alice’s career” is a significant component of how power works in EA, power in humans has many more subtle nuances than factual access to resources. Even without explicit concerns like “If I don’t do what Bob wants, Bob will make my career progression harder.”, power is shiny and overpowering and does all kinds of funny things to our monkey brains. See for example how our brains automatically adjust what we consider good fashion choices to who we deem popular in our particular subcultural bubble, how we mold our habits by them, etc.
For a more crass example: the 20th century had its fair share of wealthy spiritual leaders with sex scandals. Though e.g. Osho had no power over his followers’ real-world careers, they worshipped him like a demigod. I think it goes without saying that it would have been, if not impossible, at least outstandingly difficult for him to have a truly consensual relationship with one of his followers. Because there’s no true “yes” without an easy “no”, and there’s no easy “no” if the prophet himself calls you to his quarters.
(Which is of course very sad and inconvenient for Osho, and a requirement to adhere to this rule might have turned him off guruing completely, because the list of documented 20th-century female gurus is short.)
I know that the rule is non-negotiable for people who facilitate retreats under the AuthRev brand.
AuthRev is rather influential in the (especially North American) AR scene, so I wouldn’t be surprised if the rule seeped out further from there. I’m not well-networked enough there to know the details. And even if I did know them, I don’t think I’d want to share the saucy stories that led people to adjust the timelines upward and downward until they found their current form.
Thanks a lot! Yep, a question I always ask myself in EA’s diversity discussions is “Which kind of diversity are we talking about?”
A LessWrong post on the topic you might like if you didn’t read it yet is Kaj Sotala’s “You can never be universally inclusive”.
Thanks! Yep, that is totally in line with the fact that the Karma score of the post here is much more mixed than on LessWrong, which definitely is an Askier sphere than EA.