Kelsey clarified in a tweet that if someone asks for a conversation to be off the record and she isn’t willing to keep it off the record, she explicitly tells them so.
Presumably he made some unfounded assumptions about how sympathetic she’d be and whether she’d publish the screenshots, but he never asked her not to.
I learned about this ten months ago, personally, and (in an informal peer context) spoke to one of the people involved about it. The person in question defended the decision by saying they intended to run retreats and ask “Hamming questions”. They added that the £15m was an investment, since the castle (“technically it’s not a castle”) wouldn’t depreciate in value. Also, they opined that the EA community as a whole shouldn’t have a veto on every large purchase, because consensus decision-making is infeasible at that scale and is likely to result in vetoes of tons of potentially valuable proposals.
I think I agree with the third point at least to some extent, but that’s a meta-level point, and the object-level points did not seem like good arguments for buying a £15m castle. I came away from the conversation believing this was not a reasonable use of funds and with my opinion of CEA* lowered.
I didn’t, and still don’t, know what to do about this sort of thing. Changing how an EA org acts is hard; maybe public pressure helps, but I suspect much of the difficulty lies in changing organizational norms and policies, and I don’t have a good sense of which policies would be useful and which wouldn’t. I do have the intuition that having a larger number of distinct EA orgs would be good, so that CEA individually has less influence.
*I understand CEA to be an umbrella organization housing a number of sub-orgs, and so I remain unsure how far this negative update should propagate; certainly there are folks who work in other branches of CEA who had nothing to do with this and no say in it.
[ETA: Changed “their decision” to “the decision” upon receiving a reminder that the person in question was (probably) not the person who had actually made the original decision to buy the castle.]
Thank you for writing this! I’ve been somewhat skeptical that ATLAS is a good use of EA funding myself, but also don’t know very much about it, so I appreciate someone who’s more familiar with it and its fellows starting this conversation.
My fairly uninformed best guess is that the rumors listed here are a bit misleading / suggestive of problems being more extreme than they actually are, but that these problems do exist. But this is just a guess.
Thanks so much for writing this. As someone interested in starting to do community building at a university, this was helpful to read, especially the Alice/Bob example and the concrete advice. I do really think that EA could stand to be less big on recruiting HEAs. I think there are tons of people who are interested in EA principles but aren’t about to make a career switch, and it’s important for those people to feel welcome and like they belong in the community.
I was going to write “I kind of wish this post (or a more concise version) were required reading for community builders,” and then I thought better of it and took actions about it—namely, sent the link as feedback to the EA Student Group Handbook and made an argument that they should incorporate something like this into their guide for student groups.
Maybe the balance I’d strike here is as follows: we always respect nonintervention requests by victims. That is, if the victim says “I was harmed by X, but I think the consequences of my reporting this should not include consequence Y,” then we avoid intervening in ways that would cause Y. This is good practice generally, because you never want to disincentivize reporting by making it carry consequences the reporter doesn’t want. Usually the unwanted consequences in question are things like “I’m afraid of backlash if someone tells X that I’m the one who reported them” or “I’m just saying this to help you establish a pattern of bad behavior by X, but I don’t want to be involved, so don’t do anything based on my report alone.” But this sort of nonintervention request might also be made by victims whose point of view is “I think X is doing really impactful work, and I want my report to at most limit their engagement with EA in certain contexts (e.g., situations where they have significant influence over young EAs), not to limit their involvement in EA generally.” In other words, leave impact considerations to the victim’s own choice.
I’m not sure this is the right balance. I wrote it with one specific real example from my own life in mind, and I don’t know how well it generalizes. But it does seem to me that any less victim-friendly position would probably be worse even from a completely consequentialist perspective, because of the likelihood of driving victims away from EA.
[ETA: Whoops, realized this is answering a different question than the one the poster actually asked—they wanted to know what individual community members can do, which I don’t address here.]
Some concrete suggestions:
-Mandatory trainings for community organizers. This idea is lifted directly from academia, which often mandates trainings of this sort. The professional versions are often quite dumb and involve really annoying unskippable videos; I think a non-dumb EA version would encourage the community organizer to read the community health guide linked in the above post and then require them to pass a quiz about its contents (though if they can pass the quiz without reading the guide, that’s fine; the point is to check that they understand the guide, not to make them read it). Better but higher-effort versions of this quiz would use short-answer questions (“How would you respond to X situation?”); less useful but lower-effort versions would use multiple-choice questions. The short-answer version forces people to think harder about their answers, but it also probably requires a team of people to read all the answers and check whether they’re appropriate, which is costly.
-Some institution (the community health team? I dunno) should come up with and institute codes of conduct for EA events and make sure organizers know about them. There’d presumably need to be multiple codes of conduct for different types of events. This ties into the previous bullet, since it’s the sort of thing you’d want to make sure organizers understand. This is a bit of a vague and uninformed proposal; maybe something like this already exists, although if so I don’t know about it, which at minimum implies it ought to be more widely advertised.
-Maybe a centralized page of resources for victims and allies, with advice, separate from the code of conduct? I don’t know how useful this would be.
-Every medium/large EA event/group should have a designated community health point person, preferably though not necessarily female, who publicly announces that if someone makes you uncomfortable, you can talk to the point person and, with your permission, they’ll do what’s necessary to help; the point person should then follow through if people do report issues to them. They should also remind/inform everyone of Julia Wise’s role and, if someone comes to them with an issue and gives permission to pass it on to her and her team, do that. (You might ask: if this point person is probably just going to pass things on to Julia Wise, why even have a point person? The answer is that reporting is scary, and it can be easier to report to someone you know who has some context on the situation/group.)
Furthermore, making these things happen has to explicitly be someone’s job, or the job of a group of someones. It’s much likelier to actually happen in practice if it is someone’s specific responsibility than if it’s just an idea some people talk about on the Forum.
Something I don’t think helps much: telling all EAs that they should improve their behavior and stop being terrible. This won’t work because, unfortunately, self-identifying EAs aren’t all cooperative, nice individuals who care about not personally harming others. They have no incentive to change just because someone tells them to, and the worst offenders on these sorts of issues are very unlikely to be the sorts of people who want to read posts like this one about how to do better. That said, the posts on this subject that I find more helpful are those that include lots of specific examples or advice, especially advice for bystanders.
I think the separate Community tab is a great idea, thanks for implementing that!
Not about the current changes, but a bit of unrelated site feedback: the “library” button at the bottom of the mobile site leads to what seems to be a set of curated essays and sequences, which is good, but the sequences listed at the top are overwhelmingly about AI safety, which seems pretty unbalanced. I’d like to see this tab contain a mix of curated reading recommendations on global poverty, animal welfare, biorisk, AI safety, and other cause areas.
Yes, I agree with what you’ve written here. “This comes from a place of hurt” was actually meant as hedging/softening, along the lines of: “because you have had bad experiences, it makes sense for your post to be angry and emotionally charged, and it should not be held to the same epistemic standards as a typical EA Forum post on a less personal issue.” Sorry that wasn’t clear.
My response was based on my impressions from several years being a woman in EA circles, which are that these issues absolutely do exist and affect an unfortunately high number of women to various extents, and also that some of what’s described in this post is atypically severe. (Obviously, none of this should ever happen, to any degree of severity, and I really want to see EA get better at preventing it!) Originally, I wasn’t clear on the fact that the post was written as a personal report of harm experienced, and that its descriptions of the severity were not intended as universal claims about what is typical. The author has now made a number of edits which make the scope/intent of the post much clearer, thereby obviating much of my comment. I agree that the idea of “endorsing” someone’s report of their own experience is not useful for the reasons you describe, and on further reflection I do want to be more careful in future to respond to reports of harm in ways that don’t disincentivize reporting—that is the last thing I want to do!
My favorite fields of math are abstract algebra, algebraic topology, graph theory, and computational complexity. The latter two are my current research fields. This may seem to contradict my claim of being a pure mathematician, but I think my natural approach to research is a pure mathematician’s approach, and I have on many occasions jokingly lamented the fact that TCS is in the CS department, instead of in the math department where it belongs. (This joke is meant as a statement about my own preferences, not a claim about how the world should be.)
Some examples of specific topics I’ve found particularly fun to explain to people: the halting problem, P vs NP and the idea of poly-time reductions, Kempe’s false proof of the four-color theorem, the basics of group theory.
Yep, you are totally right about availability bias, and I don’t mean to downplay your experience at all—that’s awful, and I’d be delighted to see more efforts by EA groups to prevent this sort of thing.
And yeah, if you don’t feel like optimizing for argumentative quality, that’s valid, and my comment isn’t worth minding in that case! It’s not your job to fix these issues, and thank you for taking the time to raise awareness.
This is a big part of why I used to basically not go to official in-person EA events (I do somewhat more often nowadays after having gotten more involved in EA, though still not a ton). It makes sense that EA events are like this, because after all, EA is the topic that all the people there have in common, but it does seem a bit unfortunate for those of us who like hanging out with EAs but aren’t interested in talking about EA all the time. Maybe community event organizers should consider occasionally hosting EA events where EA is outright banned as a discussion topic, or if that’s too extreme, maybe just events where there’s some effort to create/amplify other discussion topics?
As someone in a somewhat similar position myself (donating to GiveWell, vegetarian, exploring AI safety work), this was nice to read. Diversifying is a good way to hedge against uncertainty and to exploit differing comparative advantages in different aspects of one’s life.
I’d suggest “LEA,” which is almost as easy to type as EA.
The gender identity question includes options that aren’t mutually exclusive; I believe it should either be a checkbox question or should list something along the lines of “cisgender woman, transgender woman, cisgender man, transgender man, nonbinary, other.” If you have more questions, feel free to PM me and I’m happy to do my best (as an ally) to answer them.
Absolutely agree with everything you’ve said here! AI safety is by no means the only math-y impactful work.
Most of these don’t quite feel like what I’m looking for, in that the math is being used to do something useful or valuable but the math itself isn’t very pretty. “Racing to the Precipice” looks closest to being the kind of thing I enjoy.
Thank you for the suggestions!
Thanks for writing this! I had that “eugh” feeling up until not that long ago, and it’s nice to know other people have felt the same way.
I’m particularly enthusiastic about more educational materials being created. The AGISF curriculum is good, and would have been very helpful to me if I’d encountered it at the right time. I’d be delighted to see more in that vein.
Elaborating on this, thanks to Spencer Becker-Kahn for prompting me to think about this more:
From the standpoint of my values and what I think is good, I’m an EA. But doing intellectual work, specifically, takes more than just my moral values. I can’t work on problems I don’t think are cool. I mean, I have, and I did, during undergrad, but it was a huge relief to be done with it after I finished my quals, and I have zero desire to go back. It would be unsustainable, at minimum, for me to try to work on a problem where my main motivation is “it would be morally good for me to solve this.” I struggle a bit with motivation at the best of times, or rather, on the best of problems. So if I can find something in AI safety that I think is approximately as cool as what I’m currently doing, I’ll do it, but the coolness is actually a requirement, because I won’t be successful or happy otherwise. I’m not built for it (and I think most EAs aren’t; fortunately, some of them have different tastes than I do as to what is or isn’t cool).
I am not intellectually motivated by things on the basis of their impactfulness. If I were, I wouldn’t need to ask this question.
[Epistemic status: I’ve done a lot of thinking about these issues previously; I am a female mathematician who has spent several years running mentorship/support groups for women in my academic departments and has also spent a few years in various EA circles.]
I wholeheartedly agree that EA needs to improve with respect to professional/personal life mixing, and that these fuzzy boundaries are especially bad for women. I would love to see more consciousness and effort by EA organizations toward fixing these and related issues. In particular I agree with the following:
> Not having stricter boundaries for work/sex/social in mission focused organizations brings about inefficiency and nepotism [...]. It puts EA at risk of alienating women / others due to reasons that have nothing to do with ideological differences.
However, I can’t endorse the post as written, because there are a lot of claims in it that I think are wrong or misleading. For example: sure, there are poly women who’d be happier being monogamous, but there are also poly men who’d be happier being monogamous, and my own subjective impression is that these are about equally common. Also, “EA/rationalism and redpill fit like yin and yang” does not characterize my experiences within the EA movement at all. I’m sure there are EAs who are creeps and subscribe to horrible beliefs about gender, but the vast majority of EAs I know are not like that at all. In a similar vein, regarding the claim that “many men, often competing with each other, will persuade you to join polyamory using LessWrong style jedi mindtricks while they stand to benefit from the erosion of your boundaries”: I completely agree that this is absolutely awful if/when it happens, but I also think it is a lot less common than this post makes it sound.
Overall, the post seems to do a mixture of pointing out legitimate problems and making angry overarching accusations that I don’t think are true. I believe this post comes from a place of hurt, and I am sincerely sorry that you’ve had such negative experiences. I really do want the EA community to improve at this, and I want the people who’ve given you such bad experiences to be appropriately dealt with so that they don’t harass others in future. However, I don’t think this post as written will help much, because the overarching accusations are likely to turn people off from taking the rest of the post seriously.
[ETA: Wanted to add that the supportiveness and collaborative brainstorming suggested in the thread above by Megan, Keerthana, and Rockwell totally do seem helpful and productive to me, and I am excited to see this happening.]
[Second ETA: This comment was written in response to an earlier version of this post. Since then the author has made several edits which make what I’ve said here somewhat irrelevant.]