People who are substantially harmed by a movement typically don’t tell that movement’s community builders that they’re leaving because they were substantially harmed. They give some other, less vulnerable reason, such as “lack of culture fit or interpersonal conflict” or “burnout/mental health”, two of the factors most often cited in the linked sequence on why people leave.
More stories of harm from people getting involved in EA and then bouncing. I tried to do some investigation into this in the past, and it’s by definition a hard population to interview, but my sense is that substantial harms are relatively rare.
Can you say more what investigation you did?
Maybe some people argued from a position of entitlement. I skimmed the comments you linked above and I did not see any entitlement. Perhaps you could point out more specifically what you felt was entitled, although a few comments arguing from entitlement would only move me a little, so this may not be worth pursuing.
The bigger disagreement I suspect is between what we think the point of EA and the EA community is. You wrote that you want it to be a weird do-ocracy. Would you like to expand on that?
Torres was banned for 20 years according to the link.
OP and most current EA community work takes a “Narrow EA” approach. The theory of change is that OP and EA leaders have neglected ideas and need to recruit elites to enact these ideas. Buying castles and funding expensive recruitment funnels is consistent with this strategy.
I am talking about something closer to a big tent EA approach. One vision could be to help small and medium donors in rich countries spend more money more effectively on philanthropy, with a distinctive emphasis on cause neutrality and cause prioritization. This can and probably should be started in a grassroots fashion with little money. Spending millions on fancy conferences and paying undergraduate community builders might be counter to the spirit and goals of this approach.
A fee-paying society is a natural fit for big tent EA and not for narrow EA.
I didn’t know that the huge amounts of power held by OP was my main point! I was trying to use that to explain why EA community members were so invested in the castle. I’m not sure I succeeded, especially since I agree with @Elizabeth’s points that no one needs to wait for permission from OP or anyone else to pursue what they think is right, and the EA community cannot direct OP’s donations.
To the first: Yup, it’s one answer. I’m interested to hear other ideas too.
Structure vs restructuring: My point was that a lot of the existing community infrastructure OP funds is mislabelled and is closer to a deep recruitment funnel for longtermist jobs rather than infrastructure for the EA community in general. So for the EA community to move away from OP infrastructure wouldn’t require relinquishing as much infrastructure as the labels might suggest.
For example, and this speaks to @Jason’s comment, the Center for Effective Altruism is primarily funded by the OP longtermist team to (as far as I can tell) expand and protect the longtermist ecosystem. It acts and prioritizes accordingly. It is closer to a longtermist talent recruitment agency than a center for effective altruism. EA Globals (whose impact is often measured in connections) are closer to longtermist career fairs than a global meeting of effective altruists. CEA groups prioritize recruiting people who might apply for and get OP longtermist funding (“highly engaged EAs”).
Briefly in terms of soft and hard power:
Deferring to OP
Example comment about how much some EAs defer to OP even when they know it’s bad reasoning.
OP’s epistemics are seen as the best in EA and jobs there are the most desirable.
The recent thread about OP allocating most of its neartermist budget to FAW, and especially its comments, shows much reduced deference (or at least more openness in taking such positions) among some EAs.
As more critical attention is turned towards OP among EAs, I expect deference will reduce further. I particularly hope EAs pay attention to David Thorstad’s writings on biorisk and OP’s funding of (ahem) low-quality papers on that topic (1).
I expect this will continue happening organically, particularly in response to failures and scandals, and the castle played a role in reduced deference.
I agree no one is turning down money willy-nilly, but if we ignore labels, how much OP money and effort actually goes into governance and health for the EA community, rather than recruitment for longtermist jobs?
In other words, I’m not convinced it would require restructuring rather than simply structuring.
A couple of EAs I spoke to about reforms both said that huge sums of money are needed to restructure the community and that it’s effectively impossible without a megadonor. I didn’t understand where they were coming from. Building and managing a community doesn’t take big sums of money, and EA is much richer than most movements and groups.
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
Of course this depends on what one’s vision for the EA community is.
What do you think?
I feel similarly to Jason and JWS. I don’t disagree with any of the literal statements you made but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How do they react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps do they take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
Just a quick comment that I strong upvoted this post because of the point about violated expectations in EA recruitment, and disagree voted because it’s missing some important points of why EAs should be concerned about how OP and other EA orgs spend their EA money.
I wouldn’t recommend a unilateral action unless I really trusted the other parties involved.
What do you see as the risk of building a bridge if it’s not reciprocated?
David Thorstad has written 3 posts so far casting doubt on whether biorisk itself is as plausible as EAs think it is: https://ineffectivealtruismblog.com/category/exaggerating-the-risks/biorisk/
I don’t know if the grant information is accurate (there’s a disclaimer on the page), but if it is, this is pretty shocking. I would appreciate clarification on this.
Thanks for writing this. It’s really useful work.
I agree there’s a bias where the points more popular people make are evaluated more generously, but in this case I think the karma is well deserved. The COI point is important, and Linch highlights its importance with a relevant yet brief personal story. And while the comment was quick for Linch to make, some people in the EA community would hesitate to point out a conflict of interest in public for fear of being seen as a troublemaker, so the counterfactual impact is higher than it might seem. I strongly upvoted the comment.
Linch, I believe you wrote elsewhere here that you wish people had engaged with you charitably instead of focusing on possibly flawed word choice. I have tried to do this with you, although I feel you haven’t always returned the favor (uncharitable assumptions about my motivations/background, mischaracterizing my comments). You contested that there was an element of racism in your comment, and I gave you a simple, non-legalese outline of why I think there was. In response, instead of engaging with my point, you asked me an extremely basic question about how to define racism, a question I had already partially addressed multiple times as it applies here.
My gut reaction was that this was defensiveness and that you weren’t interested in engaging; you just wanted to seem not racist and win an online debate.
Of course, my gut could be wrong. So I asked you where you were coming from. And I’m glad to hear you seem to be genuinely interested in learning whether you made mistakes here.
Unfortunately, I am not interested in the type of debate you’re setting up. I gave you a simple outline earlier of where I was coming from and you are welcome to engage with it.
I’m surprised by this question. Can you explain what prompted it? I think I’ve been pretty clear that I don’t think your comment was motivated by (1).
Let’s imagine your charitable hypothesis was true and titotal was a non-native speaker who misread some comments due to unfamiliarity with the language. When they pushed back on something you said, you condescendingly asked them whether they were a native speaker and ignored everything else they said. That is a tactic with a racist element.
I’m sorry you felt offended by my comment. A few points:
I do not think you’re a racist or were trying to be racist, or that race was on your mind when making that comment. I thought you were feeling misunderstood by titotal and mistakenly thought this was a good way to push back. I said there were no upsides and plenty of downsides to your comment and suggested that you be more direct about your actual problem with titotal instead. “If you’re feeling hopeless about conversing with someone or feeling misunderstood, say that instead.”
Your defense that this was the most charitable interpretation you could think of doesn’t engage with any of the points above. A “charitable” explanation that is unlikely to be relevant even if true is just not worth much, nor did you ask your question in a way that would make it easy for an actual non-native speaker to admit to a potential vulnerability, if that was what was going on. I read your comment as a passive-aggressive “Can’t you read?” attack that carelessly used language issues as a shield against being called out.
I’ve seen a similar comment you made previously and ignored it at the time, especially since (as you say somewhere here) you could easily have been a non-native speaker yourself. But because it had seemingly moved from a one-off comment to a pattern you thought was justified, I’m glad I pushed back on it.
I did not call you racist and neither did Akhil. We called out issues with your comment. I hope you are mindful of the difference.
I am sympathetic to a general point about native speakers scolding a non-native speaker for not being inclusive enough in their language, but you are making some assumptions in applying it here.
As an unrelated point, I personally hope whether you listen to someone or not isn’t founded on whether they display competent moral reasoning, but I’m unsure what you meant by this.
I agree with Akhil. There is no benefit to the comment you wrote and plenty of downside. If you’re feeling hopeless about conversing with someone or feeling misunderstood, say that instead. Condescendingly implying that someone who disagrees with you isn’t good enough at English because they’re not a native speaker is a terrible response.
I did some more research, and roughly 20 complaints a year of varying severity appears to be typical, according to what Julia Wise told TIME magazine for their article:
Wise, whose role at CEA involves overseeing community well-being, tells TIME she has fielded roughly 20 complaints per year in her seven years on the job, ranging from uncomfortable comments to more serious allegations of harassment and more. But with no official leadership structure, no roster of who is and isn’t in the movement, and no formal process for dealing with complaints, Wise argues, it’s hard to gauge how common such issues are within EA compared to broader society.