I work at CEA on the Community Health team as deputy head of the team. (Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
Effective giving quick take for giving season
This is quite half-baked because my social circle doesn't contain very many earning-to-give (E2G) folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn-to-give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where the people who took the advice most seriously ended up deeply embedded in other professional communities, so they heard about the switch later or found it harder to reconnect with the community and its new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.
A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.
A more out-there idea, sort of cobbled together from an idea of mine and my understanding of Ben West's thinking, is that, among the many thinking and working styles of EAs, one axis of difference might be “willing to pivot quickly, change their mind and their life plan intensely and often” vs. “not as subject to changing EA winds” (not necessarily in tension, but illustrative). Staying with E2G over many years might be related to being closer to the latter; this might be an under-rated virtue and worth leveraging.
Meta: I’m writing on behalf of the Community Health and Special Projects team (here: Community Health team) at CEA to explain how we’re thinking about next steps. For context, our team consists of:
Me, Chana Messinger: Normally I specialize (from a community health lens) in EA projects that involve high schoolers or minors, and community epistemics; since November, I’ve been the interim head of the Community Health team
Nicole Ross, the usual team head, who has been focusing on EV US board work since the FTX crisis, and when she transitions back to community health work, she plans to prioritize thinking through what changes should happen in EA given everything that happened with FTX
Julia Wise, who usually serves as a community health contact person for the EA community, but has been working primarily on other projects for a few months
Catherine Low, who serves as a contact person for the EA community among other roles
Eve McCormick, project manager and senior assistant
An affiliate and various contractors
In this comment I’ll sometimes be referring to Effective Ventures (EV) UK and Effective Ventures (EV) US together as the “EV entities” or as Effective Ventures or EV.
Where things stand and next steps:
Someone came to Julia in 2021 with information about possible misconduct by Owen Cotton-Barratt, a few years after the events they were reporting. Julia took steps at the time in response, described here. In 2021, when Nicole became her manager, Julia told Nicole that there were concerns about Owen’s behavior, but as far as they remember Julia didn’t share many details at the time.
Earlier this month, after reading the TIME piece, Julia filled me and Nicole in on more details, and then later we informed the rest of the Community Health team about what had happened. We’re now looking back on whether Julia or Nicole made mistakes in handling this, and whether we should change things about our processes going forward.
As the post notes, an external firm is going to give us their independent assessment; I think this is important, and I’m grateful to the trustees of EV UK and EV US for helping to organize it. There will also be an internal reflection process. Julia and Nicole are going to do retrospectives on this situation, which will then get discussed with me, Ben West (as transition coordinator at CEA), and some senior management and/or trustees of the EV entities, possibly looping in others at CEA or EV as well.
Further steps are yet to be decided (and some will depend on the information we learn), but could include having other members of the team do assessments of the process and decision-making in this situation and getting opinions on this situation and our approach generally from other people who do similar or analogous work, in and out of EA.
Things we will be keeping in mind as we reflect:
potential conflicts of interest, the role of power in EA, and our own incentives as a team
that crucial details can differ between people or be misremembered over years
that the best response to a pattern of making people uncomfortable (for example) can be different from the best response to an isolated incident
that there are important selection effects on what we get to hear, and that we certainly don’t have all the information we would ideally want to have
We are also going to continue our normal work. We are available for calls concerning issues in the community, and you can reach out to us via relevant team members’ emails or our form (which can be anonymous). Feel free to also use the general form or forms for specific team members to give us feedback, questions or other thoughts and perspectives, on this situation or more generally.
Forms
Chana’s form
Nicole’s Admonymous
Catherine’s Admonymous
Julia’s form
We are also considering many possibilities for proactive work to make the EA community safer and better at dealing with this kind of situation (some of which are already happening, and will continue).
If instead you’d like to share thoughts or feelings about this situation to someone not on the team, Habiba Islam, Luzia Bruckamp and Rockwell Schwartz have all kindly volunteered to be listening ears not working at CEA. (Habiba works for 80,000 Hours, which along with CEA is an Effective Ventures project, and Rockwell is paid via CEA Community Building Grant. Luzia is an EA community member who volunteered to help on Twitter.) If you have feedback for the Community Health team you’d like them to pass on, they’re happy to do that, at whatever level of anonymizing / aggregating you wish. They are all volunteering for this additional work, so may have limited time slots available, but will communicate that with you. (Note that these people were asked in an informal capacity and have not been formally assessed or trained by our team.)
There are also resources external to our team that may be useful, such as those compiled by RAINN.
I’m going to do my best in the comments to answer questions people have, with all the obvious caveats about ones I can’t answer or won’t be able to answer quickly.
My heart goes out to everyone who has suffered from sexual harassment or misconduct in this community. I’m sorry, and I care deeply about making sure our team is equipped to handle these issues well.
As the post says above, and as a manager on the team and the person who oversaw the internal review, I'd like to share updates the team has made to its policies based on the internal review we did following the Time article and Owen's statement. (My initial description of the internal review is here.) In general, these changes were progressing prior to knowing the boards' determinations, though thinking from Zach and the EV legal team has been an important input throughout.
Changes
Overall we spent dozens of hours over multiple calendar months in discussions and doing writeups, both internally to our team and getting feedback from Interim CEA CEO Ben West and others. Several team members did retrospectives or analyses on the case, and we consulted with external people (two EAs with some experience thinking about these topics as well as seven professionals in HR, law, consulting and ombuds) for advice on our processes generally.
From this we created a list of practices to change and additional steps to add. The casework team also reflected on many past cases to check that these changes were robust and applicable across a wide variety of casework.
Our changes, in rough order of importance:
Defaulting more often to getting input from more people on the team (especially if a case involves someone with power in the EA community) by having multiple caseworkers on a case and/or involving managers for oversight
Specifically, most cases now get seen by more than one person, particularly cases that involve someone with significant power in the community. (Confidentiality constraints may limit how many caseworkers work on a case.)
For the cases we take on, we have a lower bar for proactively reaching out to members of the community in order to get more information. For example, if this same case came up again, we would weigh more highly the importance of getting additional information from the woman Zach mentions above. Our previous approach pointed in this direction as well, but this change will make it less likely that we miss an important opportunity.
We are creating a system that automatically flags to us when complaints about the same person come to different caseworkers, making us more likely to catch patterns of poor behavior (this previously was caught in a less systematic way)
We have lowered the bar for what’s considered a personal conflict of interest and improved our systems so it is easier to pass a case on to another caseworker. For instance, if the same case came up again, we would have it owned by someone else.
We’ve added and improved written internal guidelines for considerations to take into account during cases, particularly for cases involving someone with power in the EA community. Previously these were distributed across different documents and tacit understanding; they are now more formalized.
Automatic reminders for follow-ups on old cases to see if any more action is needed
An update towards overcommunication and explicitness when giving feedback to people whose behavior we’d like to change; an update against assuming the feedback has been understood and incorporated
An update towards escalations to boards, legal teams and HR departments where relevant.
Better systems (reminders, templates in meetings, etc) for making sure balls aren’t dropped based on when or how information comes in
The casework side of our team meets monthly to check that we are using our systems effectively, that we have implemented these changes with new and existing cases, and to update our internal guidelines as needed.
Some notes on the changes
I’m glad we have these changes, though I think it’s worth noting my guess that a big part of what will end up driving change here is CH team members’ individual reflections on the original events (Time article + statement from Owen + original case).
The list of changes is not exhaustive, since many of the changes stemming from this internal review (depending on your bar for “policy change”) are quite small, e.g. “adding a consideration to our list of factors that may make a case more serious.” I think those “nudge”-type changes are important and useful for incorporating data without over-updating or over-bureaucratizing, though they are harder to assess externally. In general I’m trying to convey our high-level changes without creating a laundry list. Nonetheless I hope the descriptions of our process changes help people make accurate updates and have accurate expectations about our team.
There is some broader context, which is that the casework team’s approach in January of 2023 (before the Time article came out) was already different in significant ways from its approach in 2021. For instance, since Catherine joined in late 2021 (and even more so since Charlotte joined in 2023), the casework team has been able to double-check cases more often and has had a lower bar for passing on cases where there was a personal connection. The changes resulting from the internal review add to these previous shifts.
We’ll be checking in as a team at regular intervals to reflect on the value of these changes at getting better outcomes. This refers both to whether the increased time and effort per case looks like the right call (relative to e.g. handling more cases) and whether the decisions made are overall better (rather than the changes overfitting to this specific instance).
What’s still underway
In addition to the changes in how we handle personal COIs (where some team member has a COI with a person involved), we’re still working on a general approach to institutional COIs and what constitutes one, e.g. if a case involves someone in EV leadership. So far our planned approach is to ask a person external to EV to give an independent view when such an institutional COI comes up in a case we take on, but it’s not obvious that there will always be an appropriate person willing to do so. This will need more work to figure out.
As Zach indicates above, we’ll be working on a more formalized version of our current processes for escalation, COIs, and anything else the boards ask for.
I think there’s a lot of truth to the points made in this post.
I also think it’s worth flagging that several of them: networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces—are at least somewhat zero sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly on net if it starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I’m hoping that doesn’t happen!)
A second flag is that I don’t know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that they’re taking that risk, so they can decide whether it’s worth it for them.
On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they’ll end up making.
A third flag is an un-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that would be more valuable career capital for later doing EA things outside of EA, or for bringing valuable skills and knowledge to EA. (Like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects, rather than a resume full of EA things?) This concern will need to be fleshed out empirically and will vary a lot in applicability by person.
(I work on CEA’s community health team but am not making this post on behalf of that team)
I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns.
High level
I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strategy develops. I’d like to give some thoughts here that hopefully give some clarity, and we might communicate more about how we see our role in the future.
For a high level description of our work: We aim to address problems that could prevent the effective altruism community from fulfilling its potential for impact. That looks like: taking seriously problems with the culture, and problems from individuals or organizations; hearing and addressing concerns about interpersonal or organizational issues (primarily done by our community liaisons); thinking about community-wide problems and gaps and occasionally trying to fill those; and advising various actors in the EA space based on the information and expertise we have. This work allows us to address specific problems, be aware of concerning actors, and give advice to help the community do its best work.
Context on our responses
Sometimes we have significant constraints on what we can do and say that result in us being unable to share our complete perspective (or any perspective at all). Sometimes that is because people have requested that we keep some or all information about them confidential, including what actions our team has taken. Sometimes it is because us weighing in will increase public discussion that could be harmful to some or all of the people involved. This information asymmetry can be particularly tricky when someone else in the community shares some information about a situation that we think is inaccurate or is only a small part of the picture, but we’re not in a position to correct it. I’m sorry for how frustrating this can be.
I imagine this might end up being relevant to responses to this comment (and which and how and when we respond to them), so I think it’s useful to highlight.
I’ll also flag that many of our staff are at events for the next two weeks, so it might be an especially slow time for Community Health responses.
About what to expect
I think some of the disagreements here come from different understandings of what the Community Health team’s mission is or should be. We want to hear and (where possible) address problems in the community, at the interpersonal, organizational, and community levels. But we often won’t resolve a situation to the satisfaction of everyone involved, or do everything that would be helpful for individuals who were harmed. Ben mentions people “hoping that it [Community Health] will pursue justice for them.” I want to be totally upfront that we don’t see pursuing justice as our mission (and I don’t think we’ve claimed to). In the same vein, protecting people from bullies is sometimes a part of our work, and something we’d always like to be able to do, but it’s not our primary goal and sadly, we won’t always be able to do it.
We don’t want people to have a false impression of what they can expect from talking to us.
Sometimes people come to us with a picture of what they’d like to happen, but we won’t always take the steps they hope we’ll take, either because 1) we don’t agree that those steps are the right call, 2) we’re not willing to take the steps based on the information we have (for example if we don’t have their permission to ask for the other person’s side of the story), or 3) the costs (time, legal risk etc) are too great. We generally explain our considerations to the people involved, but could probably communicate better about this publicly, and as we continue thinking about strategic changes, we’ll want to give people an accurate picture of what to expect.
(At other times people come to us without specific steps they’d like us to take. Sometimes they think something should be done, but don’t know what is feasible, other times they share information as “I don’t think this is very bad and don’t want much to be done, but I thought you should know and be able to look for patterns”, which can be quite helpful.)
We talk about confidentiality and what actions we might be able to take by default in calls. Typically this results in people deciding to go forward with working with us, but some people might decide that what we’re likely to be able to provide isn’t a good match for their situation.
I don’t think the downside of a false sense of security people might get from our team’s existence is strong enough to counteract the benefits.
It’s true that we rarely write up our findings publicly. I don’t take that as damning since I don’t think that is or should be the default expectation. I think public writeups can be a valuable tool in some cases, but often there are good reasons to use other tools instead.
One main reason is the large amount of time they take — Ben pointed out that he didn’t necessarily endorse how much time this project took him, but that it was really hard to do less.
I agree with Ben that we aren’t the EA police. We have some levers we can pull related to advising on a number of decisions, and we do our best to use these to address problems and concerns. I think describing occasions that we use the information we have as “rare” is very much not reflective of the reality of our day-to-day work.
I’m sad to read in some comments that we didn’t satisfy people’s needs or wants in those situations. I’m very open to receiving feedback, concerns or complaints in my capacity as a manager on the team—feel free to message me on the forum or email me (including anonymously). I recognize someone not wanting to talk to the Community Health team might not want to share feedback with that same team, but I want the offer available for anyone who might. You can also send feedback to CEA interim CEO Ben West here.
I also think not feeling satisfied with our actions is plausibly a normal outcome even if everything is going well—sometimes the best available choice won’t make everyone (or anyone) happy. I definitely want people to come in expecting that they might not end up happy with our choices (though I think in many cases they are).
Again, if people think we’re making wrong calls, I’m interested to hear about it. Under some circumstances we can also re-review cases.
Regarding trust
We’re aware that some people might feel hesitant to talk to us (and of course, it’s entirely up to them). There are many understandable reasons for this (even if our team was flawless). Our team isn’t flawless, though, which means there are likely additional cases where people don’t want to talk to us, which I’m sad about. I don’t know how much of a problem this is.
In particular, we are worried to hear that some people didn’t feel that they’d be treated with respect (I can’t tell if they mean by our team or the general institutional network we’re a part of, or something else). In this case, it sounds like potentially they aren’t confident we’d handle their information well or treat them respectfully. If that is what they meant, that sounds like a bad (and potentially stressful) situation and I’m really sorry to hear about it. I could imagine there being a concerning pattern around this that we should prioritize learning about and working on. If at any point people wanted to share information on the reasons they wouldn’t talk to us, I’m interested (including anonymously—here for the community liaisons, here for me personally and here for Ben West, interim CEO of CEA).
People might also worry that we’d negatively update our perception of them if they were implicated in something. (This is one of the reasons people might not want to speak to us that might be implied by this post, though I am not at all sure this is what was meant). I don’t currently think we should have a strict policy of amnesty for any concerning information people provide about themselves, though we in fact try hard to not make people regret talking to us. (Strict amnesty of that kind would probably result in less of us doing things about issues we hear about and make Ben’s concerns worse rather than better, though I haven’t gone and researched this question.)
In general, we care a lot about not making people regret speaking to us and not pressuring people to do or share more than they’re comfortable with. These are big elements of why we sometimes do less than we’d like, since we don’t want to take actions they’re not comfortable with, or push them to stay involved in a situation they’d like to be done with, or to do anything that would cause them to be worried we might inadvertently deanonymize them.
My general sense (though of course there are selection effects here) is that people who talk to our team in person or on calls about our decision making often end up happier and finding us largely reasonable. I haven’t figured out how to do that at scale e.g. in public writing.
Thanks all for your thoughts and feedback.
Interestingly, a friend in academia claims the norms are much much better there. I certainly would guess there’s just more general acceptance of hooking up with people in your community in EA versus professional communities, though I suspect that in e.g. queer or feminist communities there’s tons of dating and hookups.
I really appreciate the content and tone of this; I want us to have a lot of integrity in our responses, and keep cultivating it.
(Edited to add a month later: I really liked the intensity and the connection to consequentialist reasons to care about deontological and virtue-ethical considerations. I have updated that there was a sweepingness to this post I might not endorse, and I suspect I got swept up in appreciation that the EA community had people who were going to stand strong and condemn bad behavior, over and above the specifics of the argument made.)
Strong upvote. I remember when someone I knew was being dragged on the internet, and I found some of the things they’d said really upsetting to me and my moral sensibilities, and I found it really helpful (without it necessarily changing my mind on how bad those things were!) for a friend to help me reflect on how much I had or hadn’t yet priced in the selection filter of “find the worst things anyone has ever written or that they’ve ever done”. (Sometimes it’s right to judge people on the worst thing they’ve ever done, but I suspect not often).
Similarly, in the last 8 months of working at an EA org / in the EA community, it’s been really helpful to be able to understand what’s unusual about the orgs and community, and what’s incredibly standard boilerplate (which might be good or bad—lots of normal ways of doing things are stupid) - talking to people from journalism, politics, etc has been great for contextualization.
For some reason, this post about criticisms of the Gates foundation has been really sticking with me.
These are from my observations, and I recognize many of them in myself. This is definitely not a complete list, and possibly some of these things are not very relevant; please feel free to comment to add your own.
Interpersonal and Emotional
Fear, on all sides (according to me lots of debates are bravery debates and people on “both sides” feel in the minority and fighting against a more powerful majority (and often it’s both true, just in different ways), and this is really important for understanding the dynamics)
Political backlash
What other EAs will think of you
Just sometimes the experience of being on the forum
Trying to protect colleagues or friends
Speed as a reaction to having strong opinions, or worrying that others will jump on you
Frustration at having to rehash arguments / protect things that should go without saying
Desire to gain approval / goodwill from people you’d like to hire/fund/etc you in the future
Desire to sound smart
Desire to gain approval / goodwill from your friends, or people you respect
Pattern matching (correctly or not) to conversations you’ve had before and porting over the emotional baggage from them
Sometimes it helps to assume the people you’re talking to are still trying to win their last argument with someone else
Low trust environment
Surprise that something is even a question
I think there’s a nasty feedback loop in tense situations with low trust. (This section by Ozzie Gooen)
People don’t communicate openly their takes on things.
This leads to significant misunderstanding.
This leads to distrust of each other and assumptions of poor intent.
This leads to parties doing more zero-sum or adversarial actions to each other.
When any communication does happen, it’s inspected with a magnifying glass (because of how rare it is). It’s misunderstood (because of how little communication there has been).
The communicators then think, “What’s the point? My communication is misunderstood and treated with hostility.” So they communicate less.
Not carefully tracking whether you’re being scrupulously truthful, out of a desire to get less criticism
Not feeling like part of the decision making process, opaqueness of the reasoning of EA leadership
Not understanding how and why decisions that affect you are made
Feeling misunderstood by the public, sometimes feeling wilfully misunderstood
Something to protect / Politics
Trying to protect a norm you think matters
Trying to protect other people you think are being treated unfairly
Trying to create the EA you want by fiat / speech actions
Power / game theoretical desires to have power shift in EA towards preferred distribution
Speed—a sense that the conversation will get away from you otherwise
Organizational politics
An interest in understanding the internals of organizations you’re not part of
An interest in not-sharing the internals of organizations you are part of
just saying what everyone knows out loud (copied over with some edits from a twitter thread)
Maybe it’s worth just saying the thing people probably know but isn’t always salient aloud, which is that orgs (and people) who describe themselves as “EA” vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray.
Especially for newer or less connected people, I think it’s important to make salient that there are a lot of takes (pos and neg) on the quality of thought and output of different people and orgs, which from afar might blur into “they have the EA stamp of approval”
Probably a lot of thoughtful people think whatever seems shiny in an “everyone supports this” kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable.
Feel very free to ask around to get these takes and see what you find—it’s been a learning experience for me, for sure. Lots of this is “common knowledge” to people who spend a lot of their time around professional EAs, so it doesn’t even occur to people to say, plus it’s sensitive to talk about publicly. But I think “some smart people in EA think this is totally wrongheaded” is a good prior for basically anything going on in EA.
Maybe at some point we should move to more explicit and legible conversations about each others’ strengths and weaknesses, but I haven’t thought through all the costs there, and there are many. Curious for thoughts on whether this would be good! (e.g. Oli Habryka talking about people with integrity here)
I think we have a bunch of unusual norms, but I’d prefer to:
A. Focus on the norms that are causing the most harm. (For example, maybe the professional/personal overlap has some interesting potential interventions that take seriously how enmeshed things are and how costly it is not to be able to date in your community; this is just one potential intervention point, and I don’t have a strong take yet on what the highest-leverage thing is.) I will say that romance within organizations is, in my experience, already taken pretty seriously; the harder question is the network and all the social overlap, which is much more complicated and varies a lot in how intense the overlap is (grantmakers are in a different position, some people are independent researchers who are more or less reliant on organizational leadership goodwill, the list goes on)
B. Treat intervening there as an experiment, and pull back if we think it’s not worth it
Overall, pushing back on weirdness seems to me to be trying to address too broad a thing without focusing on the highest leverage parts, and it might take away from things that feel important. It could also end up instantiated cruelly, though that’s not my crux, and it seems subject to being used as a weapon—one whose targets will shift as social mores change.
I also notice in myself a people-pleaser streak that wants the world at large to like me and my community. I think that makes me more likely on the margin to want to make changes that people on the outside want me to make, and I want to be tracking that and not let it run the show, relative to things I think are actually good ideas (including incorporating my outside view, and including things I think are good for being upstanding allies/trading partners, where we might give something up that we value for the sake of the alliance). I suspect I am not alone in having this trait, so it might be helpful to track in general.
For people who consider taking or end up taking this advice, some things I might say if we were having a 1:1 coffee about it:
Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual. I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
The culture here is full of energy and ambition and truth telling. That’s really awesome, but it can be a tricky adjustment. In some spaces, you’ll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don’t think you’re a good fit for it). Grounding in your own self worth (and your own inside views) will probably be really important.
People both are and seem really smart. It’s easy to just believe them when they say things. Remember to flag for yourself things you’ve just heard versus things you’ve discussed at length vs things you’ve really thought about yourself. Try to ask questions about the gears of people’s models, ask for credences and cruxes. Remember that people disagree, including about very big questions. Notice the difference between people’s offhand hot takes and their areas of expertise. We want you to be someone who can disagree with high status people, who can think for themselves, who is in touch with reality.
I’d recommend staying grounded with friends/connections/family outside the EA space. Making friends over the summer is great, and some of them may be deep connections you can rely on, but as with all new friends and people, you don’t have as much evidence about how those connections will develop over time or with any shifts in your relationships or situations. It’s easy to get really attached and connected to people in the new space, and that might be great, but I’d keep track of your level of emotional dependency on them.
We use the word “community”, but I wouldn’t go in assuming that if you come on your own you’ll find a waiting, welcoming, pre-made social scene, or that people will have the capacity to proactively take you under their wing and look out for you and your well-being, especially if there are lots of people in a similar boat. I don’t want you to feel like you’ve been promised anything in particular here. That might be up to you to make for yourself.
One thing that’s intense is the way that the personal and professional networks overlap, so keep that in mind as you think about how you might keep your head on straight and what support you might need if your job situation changes, you have a bad roommate experience, you date and break up with someone (maybe get a friend’s take on the EV of casual hookups or dating during this intense time, given that the emotional effects might last a while and play out in your professional life—you know yourself best and how that might play out for you).
This might be a good place to flag that just because people are EAs doesn’t mean they’re automatically nice or trustworthy; pay attention to your own sense of how to interact with strangers.
I’d recommend reading this post on power dynamics in EA.
Read C.S. Lewis’s The Inner Ring.
Feeling lonely or ungrounded or uncertain is normal. There is lots of discussion on the forum about people feeling this way and what they’ve done about it. There is an EA peer support Facebook group where you can post anonymously if you want. If you’re in more need than that, you can contact Julia Wise or Catherine Low on the community health team.
As per my other comment, some of this networking is constrained by capacity. Similarly, I wouldn’t go in assuming you’ll find a mentor or office space or all the networking you want. By all means ask, but also give people room to say no, and respect their time and professional spaces and norms. Given the capacity constraints, I wouldn’t be surprised if weird status or competitive dynamics formed, even among people in a similar cohort. That can be hard.
Status stuff in general is likely to come up; there’s just a ton of the ingredients for feeling like you need to be in the room with the shiniest people and impress them. That seems really hard; be gentle with yourself if it comes up. That said, it’s a dynamic worth trying to avoid, which I think happens via emotional grounding, cultivating the ability to figure out what you believe even if high status people disagree, and keeping your eye on the ball.
This comment and this post, and even the many other things you can read, are not all the possible information; this is a community with illegibility like any other, and people all theoretically interacting with the same space might have really different experiences. See what ways of navigating it work for you, and if you’re unsure, treat it as an experiment.
Keep your eye on the ball. Remember that the goal is to make incredible things happen and help save the world. Keep in touch with your actual goals, maybe by making a plan in advance of what a great time in the Bay would look like, and what would count as a success and what wouldn’t. Maybe ask friends to check in with you about how that’s going.
My guess is that having or finding projects and working hard on them, or on developing skills, will be a better bet for happiness and impact than a more “just hang around and network” approach (unless you approach that as a project—trying to create and develop models of community building, testing hypotheses empirically, etc.). If you find that you’re not skilling up as much as you’d like, or not getting out of the Bay what you’d hoped, figure out where your impact lies and do that. If you find that the Bay has social dynamics and norms that are making you unhappy and limiting your ability to work, take care of yourself and safeguard the impact you’ll have over the course of your life.
We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.