A brainstorm of meta-EA projects (MCF 2023)

This post is part of a sequence on Meta Coordination Forum 2023. It summarizes pre-event survey respondents’ brainstorming on projects they’d like to see.

You can read more about the pre-event survey results, the survey respondents, and the event here.

About this survey section

We solicited project proposals from Meta Coordination Forum (MCF) 2023 attendees by asking a few closely related, optional questions. These included:

  • What new projects would you like to see in EA?

  • What obviously important things aren’t getting done?

  • What projects should have existed a year ago?

  • What’s a “public good for EA” that nobody has a direct incentive to do but that would benefit lots of people?

The resulting list is not a definitive ranking of the best meta-EA projects; it's a brainstorm, not a systematic evaluation of options.

  • Respondents filled in their answers quickly and may not endorse them on reflection.

  • Respondents probably disagree with each other. We didn't ask respondents to evaluate each other's suggestions, but we're fairly confident that doing so would have revealed big disagreements. (There was significant disagreement on most other survey questions!)

  • The value of these projects depends on how well they are executed and who owns them.

If someone is interested in taking on one of these projects and would like to connect with the person who proposed it, please reach out. We may be able to put you in touch.

Project Proposals

Coordination and Communication

  1. Projects focused on improving connections to groups outside EA (e.g., government, companies, foundations, media).

  2. A common-knowledge spreadsheet of the directly responsible individuals (DRIs) for important projects.

  3. More “public good”-type resources on the state of different talent pipelines and important metrics (e.g., interest in EA).

  4. More coherent and transparent communication about the funding situation/bar and priorities.

  5. More effort to identify low-integrity actors and make them known through some transparent mechanism.

  6. More effort to improve boards and reduce conflicts of interest across organizations.

  7. More risk-management capacity for EA as a field, not just for individual orgs.

Career Advice and Talent Allocation

  1. Advanced 80K: Career advice targeted at highly committed and talented individuals.

  2. A separate organization that is an 80K analogue for mid-career people.

Community Engagement

  1. More creative and fun ways for young people to learn about EA principles that don’t place as much emphasis on doing “the single most important thing”.

  2. More support and appreciation for people doing effective giving work (GWWC, Longview), and encouragement for others to do more of this.

  3. A survey to identify why high-value potential members “bounce off” EA.

  4. A better way to platform great community members who can promote virtues and key principles.

AI Safety

  1. AI Safety Next Steps: A guide to facilitate entry into AI safety research and activism.

  2. Something to help people understand and evaluate the actions of AI labs, and possibly critique them.

  3. An org that can hire and lightly manage independent researchers.

  4. A better understanding of the relevance of UK and EU AI policy to x-risk, and how it compares to US policy.

  5. A really good book on AI risk.

  6. AGISF (AGI Safety Fundamentals) in workshop form.

  7. More AI safety grantmaking.

  8. A public policy institution that straightforwardly makes the case that AI poses an existential risk.

Evaluation and Accountability

  1. More charity evaluators.

  2. More measurement and evaluation/accountability of meta projects.

  3. A public EA impact investing evaluator.

Fundraising and Donor Engagement

  1. More work on donor cultivation and fundraising.

  2. A new grantmaker with beneficial attributes like speed, strong judgment, and solid infrastructure.

  3. More community building for effective giving.

Education and Training

  1. Systematic educational/training materials and community building in areas outside AI safety.

  2. Leadership fast-track program.

Media and Outreach

  1. A podcast to keep people updated on EA-related developments.

  2. A bunch of media platforms for sharing EA ideas (YouTube, podcast, Twitter, etc.).

  3. An analogue of Non-trivial but for university students.

  4. Better on-ramps to the most impactful career paths.

Diversity and Inclusion

  1. An organization that specializes in improving ethnic, racial, and socioeconomic diversity within EA.

Other Initiatives

  1. A high-quality longtermist incubator.

  2. EAG-like cause-specific conferences.

  3. Fast Grants and other quick funding mechanisms.

  4. A post-FTX investigative unit.

  5. An awards program to create more appreciation within the community.

  6. More badass, obvious wins in global health and development (GHD), like Wave.

  7. An initiative that helps people prepare for crunch time and crises.

  8. More applied cause-prioritization work outside of Open Philanthropy.

  9. More critiques of views closely associated with Open Philanthropy funding.

  10. Cause-specific community-building organizations, analogous to what CEA does for EA.

(Reversed) What is a project or norm that you don’t want to see?

  • Incubators: One respondent called incubators “super hard and over-done,” noting that they are too meta and often started by people without entrepreneurial experience.

  • Making Donor Participation Onerous: One respondent is concerned that setting high standards for donors could make it difficult for new donors to contribute to EA, possibly causing the community to shrink.

  • Community Building and Early Funnel Bottlenecks: One respondent thinks that non-targeted community building may be overrated, and that the early stages of the community funnel aren't much of a bottleneck except in exceptional cases.

  • Community Building Projects Split: One respondent is, on the margin, against community-building projects that focus specifically on either neartermism or longtermism rather than on broader EA.