What’s the theory of change of “Come to the bay over the summer!”?

Some EAs from the bay area like to invite people they find especially promising “to come to the bay over the summer” and “learn stuff and skill up”. It’s often very unclear to the other person what exactly the summer activity being referred to is, and why it has to be done in the bay. It was very unclear to me. I’ve come to the bay area now, and it has had tremendous benefits for me. So now I’ll try to lay out a theory of change for “coming to the bay over the summer” that helps other people assess whether this is something they want to do.

I’ve specifically come to Berkeley, which is the general EA hotspot around here, and this is where my points are most applicable. Also, the effects of coming to the bay are pretty diffuse and hard to nail down, which is why I’m going to list a lot of factors.

1. All-year EAG

There are lots of cool and impressive EAs around, especially alignment researchers. You can reach out to them and ask for a 1-1 chat, just like at EAGs.

Coming to Berkeley also has three advantages over EAG:

  • Cool and impressive people are usually booked out at EAGs.

  • Coming to Berkeley and, e.g., running into someone impressive at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space).

  • If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in EA projects, hang around cool EAs.

2. Great people

Besides impressive and well-known EAs, the bay also has an incredibly high concentration of amazing but less-well-known EAs. Sometimes you simply chat with someone and it turns out to be hugely valuable even though you don’t know the person and wouldn’t have known to reach out to them. The average person you interact with here is probably smarter and has better models of EA than the average person where you are based. This means you get better input from the outside in all sorts of ways, including:

  • Knowledge embedded implicitly in EA culture. E.g., through implicit EA culture I learnt a lot about building my own models, considering weird ideas, and decreasing coordination costs via high trust.

  • Better ideas in general because smart people have smart ideas (often)

  • Better input on your career plans and projects (which is where high context on EA is especially useful). Generally, if you have thoughts or ideas related to EA, talking to lots of people about them should likely be your immediate next step. You will rapidly improve, adjust, or discard them, and not talking to people will just slow you down a lot.

  • Spicier takes that require a high level of EA buy-in. (Depends on you whether you think this is good.)

Caveat: Berkeley EA is a subculture, and like every subculture it’s an echo chamber. There’s implicit knowledge in this culture, but there’s also cultural baggage that has somehow gotten caught in the echo chamber. Not every quirk of Berkeley EA is especially smart or rational. Some may be harmful. However, I maintain it’s a better-than-usual echo chamber with better-than-usual quirks.

3. Networking

Going to office spaces, dinners, parties, etc. in Berkeley will give you a lot of networking. Networking is sort of the precursor to the previous point, great people who give you great input. However, that’s not the only thing networking affords you, hence I list it here as a separate point. Being networked also affords you favours, lets you exert influence on important decisions and on if/​how things are done in Berkeley EA projects, and by extension in EA as a whole. Being networked also leads to serendipitous encounters with even more people (meeting new people is easier if you already know a lot of people).

Meeting in-person in Berkeley lends itself much better to developing any sort of connection to people and assessing their potential than doing it virtually. (You want people to be able to assess your potential so they will want to give you cool opportunities.)

4. Shifting deprecated defaults and intuitions

If you have intellectually changed your mind, but not emotionally, you will not be very effective at acting in accordance with your new view. If you’ve updated from valuing your time at $20/​h to $100/​h, but you still feel like your time is cheap, you’ll often intuitively make bad tradeoffs. Surrounding yourself with people who act in accordance with the correct defaults and intuitions will shift you more towards those emotionally.
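To make the $20/h vs. $100/h point concrete, here is a minimal sketch of the underlying arithmetic. The `worth_it` helper and all the numbers are illustrative assumptions of mine, not from the post:

```python
# Break-even check: is paying `cost_dollars` to save `minutes_saved`
# worth it, given how you value an hour of your time?
def worth_it(cost_dollars: float, minutes_saved: float, hourly_value: float) -> bool:
    # Dollar value of the time saved, compared against the cost.
    return hourly_value * (minutes_saved / 60) > cost_dollars

# Paying $15 for a delivery that saves you 30 minutes:
print(worth_it(15, 30, 20))   # at $20/h the time saved is worth $10 -> False
print(worth_it(15, 30, 100))  # at $100/h it is worth $50 -> True
```

The same purchase flips from a bad tradeoff to a good one as your time valuation changes, which is exactly the kind of call your intuitions keep getting wrong if they haven’t caught up with your updated numbers.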

Some especially important defaults and intuitions are:

  • An intuition for the value of your time

  • Defaulting to viewing things in the light of EA. When you want to start a new project, buy a new thing, or move to a new place, EA might not immediately occur to you as a consideration to take into account in your decision. Maybe you’d like to have this default, though.

  • Defaulting to maximising impact instead of satisficing. Asking “What is the most impactful EA project I could start?” instead of trying to get some EA job

  • An intuition for the gravity of x-risk. Feeling this emotionally, not just knowing it intellectually

5. Information flows

There’s some information that’s hard to get quickly and in a distilled fashion the further you are from EA hubs. I’m not entirely clear on what the mechanisms are that make this so. I will just describe some of the information of this kind.

Landscape knowledge: Information on the landscape of EA, or the field of alignment, or biosecurity etc. Including: The relevant people and what they do, the existing projects/​orgs and how they think about the problem, the bottlenecks of the space as a whole, disagreements between people or projects/​orgs, recent developments/​the trajectory the space is on as a whole, hot takes floating around.

Landscape knowledge is insanely useful. Without knowing the whole space, you can’t make informed decisions about what sub-space you want to work in (Alignment agent foundations? Empirical research?). Without knowing the bottlenecks of the space, you don’t know what’s most urgent to work on. Without knowing about relevant infrastructure, you don’t know of all the support available in different places.

I want to put special emphasis on knowing about disagreements in the field because I personally was uninformed about those for a long time within alignment. I was planning to become a technical alignment researcher, but hadn’t realised how different the premises are that different alignment efforts (MIRI, ARC, Anthropic, …) are built upon, which makes work on the wrong agenda effectively useless on some worldviews. I could’ve easily ended up digging into, say, MIRI’s research, only to realise very late that I actually think their approach is hopeless.

How to get hiring ready: For some organisations this is clearer than for others. Either way, it helps to talk to several people who have recently been hired by the organisation in question, or who are on a good track to get there.

Opportunities: My experience has been that I’ve learned about more opportunities the closer I’ve been to EA hotspots physically and socially. So, so many fellowships, grants, scholarships, jobs, internships, retreats, summits, contests, invitations to just go and stay somewhere for a while…

Generally, I see the state of information flows in EA as highly suboptimal. I’d like to work on this. If you do too, please get in touch.

6. Space for the important stuff

In normal life, you can never give Effective Altruism space and time in proportion to its importance. Things come up: work, friends, interests, easier ways to spend your free time. Coming to Berkeley is helpful because it sets aside weeks or months for thinking about important stuff. And the environment here pushes you towards thinking about important stuff, not away from it.

The idea of making good money-time tradeoffs, for example, was just some idea from the internet for me back in my home environment. Some weird idea as well, one that felt unintuitive and socially unacceptable with the people around me. The incentive landscape was just not at all favourable to me engaging with this idea seriously and giving it space to unfold. In Berkeley, on the other hand, people will bring it up to you, or it will come up because of the money-time tradeoffs people make all the time, and people want to hear your takes, and space is automatically made for you to think about this idea. Not to mention that, whether this is a good thing or not, hearing an idea from a person is just much more engaging psychologically than reading it on a forum.

Usually all this space is also used in above-average ways since (related to 2.) the smart and highly engaged EAs around act as a filter for ideas, such that the highest quality ideas get amplified the most in Berkeley (roughly).

7. Moving towards doing ambitious EA work

(This is technically a child node of Information flows + Shifting deprecated defaults and intuitions, but it’s so important that it warrants a separate heading.)

It’s unclear from the outside:

  • How desperately in need of people every EA project is, and how many projects never get started because there’s no one around to own them

  • How easy it is to start a project and how secure this is relative to starting ambitious things outside of EA. Funding, advisors, a high-trust community, and social prestige are available

  • How close you personally are to doing EA work /​ starting an EA project. People tend to overestimate how competent/​experienced other people are who get cool jobs or start cool projects. Meeting these people in Berkeley helps you internalise this: everyone’s clueless

  • What’s possible. Seeing the scale at which EA projects in the bay operate dispels false notions of limits and helps you shoot for the correct level of ambition

Even once you know these things intellectually, it’s hard to act in accordance with them before knowing them viscerally, e.g., before you feel viscerally secure in starting an ambitious project. Coming to Berkeley really helps with that.

8. Motivational effects

Interacting with passionate, value-aligned people on a daily basis feels very motivating and nourishing. Being able to talk about your work with others and get excited together is nice. I personally have never worked so much in my entire life and it’s by choice. Working from an EA office is ideal for these things, and also increases your productivity by taking care of meals, environmental design, various charger types, and other logistical hassle. The fact that you get social approval for being impactful also doesn’t hurt.

9. An amazing community

I personally really love the EA and rationalist community here. There’s an amazing concentration of smart and interesting people. There are people geeking out about the concept of agency or value-alignment or consciousness, people discovering emotional work together, people doing weird shit like ecstatic dance, and lots of other types of people as well. Being part of this community has been incredibly valuable to me and has made me even more committed to EA. I’ve made many great friends here.

Concluding remarks

Possibly, some of these benefits could come from just talking to EAs influenced by the bay area, and not travelling there yourself. Probably less than 50% of the benefits though.

If this theory of change speaks to your current needs/​bottlenecks, and you’ve been convinced to try coming to the bay, please contact me explaining where you are currently at and how Berkeley might help you. You can also apply for a call with Akash to speak about your plans.