13 background claims about EA

I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly-involved EAs.

In this post, I explicitly lay out some of this information. I hope it will be useful for people who are new to EA or who are not living in an EA hub.

Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are “Akash’s opinions” as opposed to “background claims.”)

Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community.

General

1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% chance of extinction in the next 10-30 years.)

2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity’s long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years, and that much of the value of the present/​future will be shaped by the extent to which these systems are aligned with human values.

3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.)

4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/​socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/​opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.)

AI Safety

5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster; e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly.)

6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety.

7. There are several programs designed to help people get started in AI safety. Examples include SERI MATS (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBSS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged Program or Reading List.)

8. There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you’ve learned, and visiting an EA hub).

9. People can apply for grants to skill up in AI safety. You do not have to propose an extremely specific project, and you can apply even if you’re new. Grant applications often take 1-2 hours. Check out the Long-Term Future Fund.

10. LessWrong is better than the EA Forum for posts/​discussions relating to AI safety (though the EA Forum is better for posts/​discussions relating to EA culture/​strategy).

Getting Involved

11. The longtermist EA community is small. There are not tons of extremely intelligent/​qualified people working on the world’s most pressing issues; instead, there is a small group of young people with relatively little experience. We are often doing things we don’t know how to do, and we are scrambling to figure things out. There is a lot that needs to be done, and the odds that you could meaningfully contribute are higher than you might expect. (See also Lifeguards.)

12. Funders generally want to receive more applications. (I think most people should have a lower bar for applying for funding.)

13. If you want to get involved but you don’t see a great fit in any of the current job openings, consider starting your own project (get feedback and consider downside risks, of course). Or consider reaching out to EAs for ideas (if you’re interested in longtermism or AI safety, feel free to message me).

I am grateful to Olivia Jimenez, Miranda Zhang, and Christian Smith for feedback on this post.