EA Infrastructure Fund’s Plan to Focus on Principles-First EA

Edit 2024/​01/​02: People who like this new vision may want to consider donating to EAIF!

Now until the end of January is an unusually good time to donate, as it allows you to take advantage of 2:1 Open Phil matching ($2 from them for $1 from you)[1]. My best guess is that (unlike with LTFF) the EAIF match will not be filled by default unless significantly more donors chip in. We currently have ~$1.5M of the $3.5M matching filled (see the live dashboard here).
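The matching arithmetic above can be sketched in a few lines. The cap-handling is an assumption on my part (the post doesn't specify exactly how the match behaves near the $3.5M limit):

```python
# Hedged sketch of the 2:1 donor-matching arithmetic described above.
# Assumption (not stated in the post): $3.5M is the total pool of Open Phil
# matching funds, and matching simply stops once the pool is exhausted.

def match_amount(donation, matched_so_far, match_pool=3_500_000, ratio=2.0):
    """Return (open_phil_match, total_moved) for a donor's gift."""
    remaining_match = max(match_pool - matched_so_far, 0)
    open_phil_match = min(donation * ratio, remaining_match)
    return open_phil_match, donation + open_phil_match

# Example: a $1,000 donation while ~$1.5M of the match is already filled
# moves $3,000 in total ($1,000 from the donor, $2,000 from Open Phil).
match, total = match_amount(1_000, 1_500_000)
```

Under these assumptions, each marginal dollar donated while the pool lasts moves three dollars to EAIF.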

Summary

  • EA Infrastructure Fund (EAIF)[1] has historically had a somewhat scattershot focus within “EA meta.” This makes it difficult for us to know what to optimize for or for donors to evaluate our performance. More

  • We propose that we switch towards focusing our grantmaking on Principles-First EA.[2] More

  • This includes supporting:

    • research that aids prioritization across different cause areas

    • projects that build communities focused on impartial, scope-sensitive, and ambitious altruism

    • infrastructure, especially epistemic infrastructure, to support these aims

  • We hope that the tighter focus area will make it easier for donors and community members to evaluate the EA Infrastructure Fund, and decide for themselves whether EAIF is a good fit to donate to or otherwise support.

  • Our tentative plan is to collect feedback from the community, donors, and other stakeholders until the end of this year. Early 2024 will focus on refining our approach and helping ease transition for grantees. We’ll begin piloting our new vision in Q2 2024. More

Note: The document was originally an internal memo written by Caleb Parikh, which Linch Zhang adapted into an EA Forum post. Below, we outline a tentative plan. We are interested in gathering feedback from community members, particularly donors and EAIF grantees, to see how excited they’d be about the new vision.

Introduction and background context

I (Caleb)[3] think the EA Infrastructure Fund needs a more coherent and transparent vision than the one it is currently operating under.

EA Funds’ EA Infrastructure Fund was started about 7 years ago under CEA. The EA Infrastructure Fund (formerly known as the EA Community Fund or EA Meta Fund) has given out 499 grants worth about $18.9 million since the start of 2020. Throughout its various iterations, the fund has had a large impact on the community, and I am proud of a number of the grants we’ve given out. However, the terminal goal of the fund has been somewhat conceptually confused, which has likely led to a focus and allocation of resources that often seemed scattered and inconsistent.

For example, EAIF has funded various projects that are associated with meta EA. Sometimes, these are expansive, community-oriented endeavors like local EA groups and podcasts on effective altruism topics. However, we’ve also funded more specialized projects for EA-adjacent communities. The projects include rationality meetups, fundraisers for effective giving in global health, and AI Safety retreats.

Furthermore, in recent years, EAIF also functioned as a catch-all grantmaker for EA or EA-adjacent projects that aren’t clearly under the purview of other funds. As an example, it has backed early-stage global health and development projects.

I think EAIF has historically served a valuable function. However, I currently think it would be better for EAIF to have a narrower focus. As the lead for EA Funds, I have found EAIF’s bottom line quite unclear, which has made it challenging for me to assess its performance and grantmaking quality. This lack of clarity has also posed challenges for fund managers in evaluating grant proposals, as they frequently face thorny philosophical questions, such as determining the comparative value of a neartermist career versus a longtermist career.

Furthermore, the lack of conceptual clarity makes it difficult for donors to assess our effectiveness or how well we match their donation objectives. This problem is exacerbated by our switch back to a more community-funded model, in contrast to our previous reliance on significant institutional donors like Open Phil[4]. I expect most small and medium-sized individual donors to have less time and fewer resources to carefully evaluate EAIF’s grantmaking quality in the face of these conceptual confusions. Likewise, grantees, applicants, and potential applicants may be confused about what the fund is looking for.

Finally, having a narrower purpose clarifies the areas EAIF does not cover, allowing other funders to step in where needed. Internally, having a narrower purpose means our fund managers can specialize further, increasing the efficiency and systematization of grant evaluations.

Here is my proposal for the new EAIF vision, largely focused on principles-first EA (or at least my interpretation of principles-first EA). I believe this gives the fund a clearer bottom line, complements the work done by other orgs, and addresses a perspective currently neglected by other EA funders.

Proposal

The EA Infrastructure Fund will fund and support projects that build and empower the community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view. In general, it will fund a mixture of activities that:

  1. Increase the number of people trying to do an ambitious amount of good using evidence and reason, with a focus on scope sensitivity, impartiality, and radical empathy. This will mainly, though not exclusively, be achieved by expanding the Effective Altruism community.

  2. Help these individuals allocate their resources, or otherwise do good, in the most altruistically impactful way.

Examples of projects under the new EAIF’s purview

  • Research that aids prioritization across different cause areas

    • Initiatives such as 80,000 Hours and Probably Good

    • A retreat for researchers in nascent fields (like ethics concerning digital minds) where the outputs may be helpful in determining whether the EA community should devote more resources towards expanding those fields.

  • Projects that help grow the EA community.

    • Local and university-based EA groups with high epistemic integrity

    • An op-ed discussing the world’s most pressing issues from an Effective Altruism viewpoint.

  • Infrastructure, especially epistemic infrastructure, to support these aims

    • Guesstimate

    • Manifold Markets

Examples of projects that are outside of the updated scope

Here are some examples of “meta” projects that may have been within the current EAIF’s purview, but I think will fall outside of the new scope. (Note that many of them might still otherwise be exciting or high-impact).

  1. An introductory website to AI Safety, such as AIsafety.com.

  2. Travel reimbursements for students to attend an alternative protein conference.

  3. A community-based AI safety or animal welfare group.

  4. A university society or club dedicated to global health and wellbeing.

  5. An organization promoting effective giving, aimed at supporting Global Health and Wellbeing charities.

Why focus on Principles-First EA?

H/​t to Joe Carlsmith for crystallizing some of these takes for me.

  • I[3] think EA is doing something special.

    • It has attracted many people towards problems that I think are very pressing, including many people that I think have a lot of impact potential.

    • Historically, EA has acted as a beacon for thoughtful, sincere, and selfless individuals to connect with each other.

      • I think at some point we shifted towards more of a recruitment-focused approach, rather than nurturing a community ethos. In my view, this shift moved us away from the core of EA that I think of as special and important.

        • Though perhaps a more recruitment-oriented version of EA could be better for the world.

  • I think that fighting for EA right now could make it meaningfully more likely to thrive long term.

    • I suspect that the reputation of EA has (justifiably) taken a hit due to the FTX situation and other negative media attention—but the impact on the brand isn’t as severe as it might seem.

      • I think there are many updates to be made from FTX, but we shouldn’t abandon the brand or movement-building altogether just because of FTX.

    • Many of the organizations that I see as most central to keeping EA afloat might decide to prioritize direct work. I think this is true even if, collectively, they’d endorse more resources going towards Principles-First EA than the current allocation.

  • I think that we could make EA much better than it currently is—particularly on the “beacon for thoughtful, sincere, and selfless” front.

    • I don’t think EA has done much “noticing what is important to work on” recently.

      • Historically, EA has had many thoughtful people who discovered or were early adopters of novel and important ideas.

      • I don’t think many people have recently tried to bring those people together with the express goal of identifying and working on pressing causes.

Potential Metrics

Note that I’m not looking to directly optimize for these metrics. Rather, “If the fund is operating well, I predict we’ll see improvements along these dimensions.”

Below are some potential metrics we could consider:

  • The number of people explicitly using EA principles to guide large decisions in their lives.

  • The number of people who can explain the key cruxes between longtermism and neartermism, and between helping humans and helping non-human animals

  • The number of people working in jobs that generate a substantial amount of altruistic impact, spanning a range of moral views that we find credible.

  • The number of people meaningfully engaging with the EA community

  • The quality of discussions on the EA forum and other EA platforms, specifically focusing on their epistemic rigor, originality, and usefulness for making a positive impact in the world.

  • The caliber of attendees at EA global events—evaluated based on their alignment with EA values and their fit for impactful roles.

I will also be interested in quality-weighting these metrics, though this is controversial and may be hard to do in a worldview-agnostic manner. (One possibility is a relatively neutral assessment of some combination of competency and dedication.)

Potential Alternatives for Donors and Grantees

I (Linch) might add more details to this section pending future comments and suggestions.

Unfortunately, some people and projects who are a good fit for EAIF’s current goals might not be a good fit for the new goals. Likewise, donors may wish to re-evaluate their willingness to contribute to EAIF in light of the new strategy.

For people doing meta-work that is closely associated with a specific cause area, we encourage you to apply for funds that specialize in that cause area (e.g. LTFF for work on longtermism or mitigating global catastrophic risks, Animal Welfare Fund for animals-focused meta projects).

I will also try to keep an updated list of alternative funding options below. Readers are also welcome to suggest other options in the comments.

People may also find it useful to read Vilhelm and Jona’s observations on the funding landscape of EA and AI Safety.

Tentative Timeline

Until EOY 2023:

  • Get feedback from community members and other stakeholders

  • Gauge donor interest and get soft commitments from donors, to understand what scale EAIF should be operating on next year

Q1 2024

  • Scope out vision more and define metrics more clearly

  • Hire for a new fund chair for EAIF (determine part-time or full-time status based on applicant interest and scale expectations)

  • Hire EAIF fund managers and assistant fund managers

  • Phase out the current version of EAIF (e.g. by giving out exit grants)

Q2 2024

  • Onboard EAIF fund chair

  • 3-month trial period for the new vision

Q3 2024 onwards

  • Continue grantmaking under the new vision (if trial period worked out well)

Appendices

(no need to read, but feel free to if you want to)

Examples of projects that I (Caleb) would be excited for this fund to support

  • A program that puts particularly thoughtful researchers who want to investigate speculative but potentially important considerations (like acausal trade and ethics of digital minds) in the same physical space and gives them stipends—ideally with mentorship and potentially an emphasis on collaboration.

  • EA groups at top universities, particularly ones that aren’t just funnelling people into longtermism or AIS.

  • A book or podcast talking about underappreciated moral principles

  • Foundational research into big, if true, areas that aren’t currently receiving much attention (e.g. post-AGI governance, ECL, wild animal suffering, suffering of current AI systems).

  • Research that challenges common assumptions or raises rarely discussed considerations, like Growth and the case against randomista development.

Note that I[3] don’t plan on being the chair of this fund indefinitely, and probably won’t try to make these kinds of grants whilst I chair the fund.

Scope Assessment of Hypothetical EAIF Applications

These fictional grants are taken from this post; all of them are in scope under EAIF’s current remit, but below I assess them against the proposed new scope.

In scope

  • Continued funding for a well-executed podcast featuring innovative thinking from a range of cause areas in effective altruism ($25,000)

  • A program run by a former career counsellor at an elite college introducing intellectually- and ethically-minded college freshmen to EA and future-oriented thinking ($35,000)

  • A six-month stipend and expenses for a dedicated national coordinator of EA Colombia[5] to aid community expansion and project coordination ($12,000)

  • Expenses for a student magazine covering issues like biosecurity and factory farming for non-EA audiences ($9,000)

  • 12 months’ living stipend, rent, and operational expenses for 2 co-organizers to develop and test out a program for specialised skill development within the Indonesian EA community and to deliver high-quality localized content ($35,000)

  • Rerunning a large-scale study on perceptions of the EA brand to see if the results changed post-November 2022 ($11,000)

  • Stipend for 4 full-time equivalent (FTE) employees and operational expenses for an independent research organisation that conducts EA cause prioritisation research and assists a few medium-sized donors ($500,000)

Unclear

  • A nine-month stipend for a community builder to run an EA group for professionals in a US tech hub ($45,000)

    • If this ended up mostly focused on AI safety (as it’s in a tech hub), it should instead be funded by the LTFF. If it discusses EA more broadly, then EAIF should fund it.

  • Capital and a part-time stipend for an organiser to obtain rental accommodation for 15 students visiting EA hubs for internships during the summer ($40,000)

    • If this ended up mostly focused on longtermist causes (which is plausible given the kinds of orgs that offer internships in EA hubs), it should instead be funded by the LTFF.

Out of scope

  • Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to understand better how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)

  • Stipend and one year of expenses for someone with local experience in high-net-worth fundraising to launch an Effective Giving Singapore[5] website and start fundraising initiatives in Singapore for highly impactful global health charities ($170,000)

  • A 12-month stipend and budget for an EA to conduct programs to increase the positive impact of biomedical engineers and scientists ($75,000)

Key Considerations

I encourage commenters to share their own cruxes as comments.

  1. Is this vision philosophically coherent?

  2. Will this lead to a specific and narrow worldview dominating EAIF?

  3. How viable is the “EA beacon” in light of FTX?

  4. Are donors excited about this vision?

  5. Do others in the EA community think furthering this vision is a priority (relative to progress in core cause areas)?

  1. ^

    The EA Infrastructure Fund is part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA Inc. (“EV US”). Donations to EAIF are donations to EV US or EV UK. Effective Ventures Foundation (UK) (EV UK) is a charity in England and Wales (with registered charity number 1149828, registered company number 07962181, and is also a Netherlands registered tax-deductible entity ANBI 825776867). Effective Ventures Foundation USA Inc. (EV US) is a section 501(c)(3) organization in the USA (EIN 47-1988398). Please see important state disclosures here.

  2. ^

    Also known as “EA qua EA” or “community-first EA.” Basically, focusing on this odd community of people who are willing to impartially improve the world as much as possible, without presupposing specific empirical beliefs about the world (like AGI timelines or shrimp sentience).

  3. ^

    In here and the rest of the document, “I”, “me”, “my” etc refers to Caleb Parikh, unless explicitly stated otherwise. In practice, many of the actual words in the post were written by Linch Zhang, who likes the vision and tried to convey it faithfully, but is genuinely uncertain about how it compares to other plausible visions.

  4. ^

    Before our distancing and independence from Open Phil, in 2022, Open Phil accounted for >80% of EAIF’s funding. For comparison, institutional funders have historically accounted for <50% of LTFF’s funding.

  5. ^

    As with the parent post, any reference to proper nouns in hypothetical grants, including country and regional names, should be assumed to be fictional.

Crossposted to LessWrong