Discussion: Adding New Funds to EA Funds

In our EA Funds launch post, we noted that:

[I]n the future we hope to encourage new fund managers to create new funds with different focus areas than the current options.

As our three-month trial draws to a close, we’re now thinking more seriously about adding new funds to EA Funds. However, there are a number of open questions that bear on how many funds we might add, which funds we might add, and how quickly we could add them. I outline the relevant open questions as I see them below.

CEA plans to discuss adding new funds during our team retreat after EA Global: Boston. The goal of this post is to get feedback on these questions from the community to help inform that discussion. Please provide feedback in the comments below. If you’re attending EA Global: Boston, you can also grab me for a quick chat there.

Below I present each open question, try to explain the full range of options available, and then outline some of the considerations that I think are relevant in addressing the question. The goal is to remain neutral on the answer while still providing relevant information. The inclusion of an option or a consideration does not necessarily imply endorsement of that option or consideration by me or by others at CEA.

If you think there are open questions to address that I have missed, please feel free to suggest them in the comments.

Open Questions

Question 1: Should we add new funds? If so, when?

The first question is whether we should add new funds at all and, if so, on what timeline we should add them. Part of the answer depends on how much money is moving through EA Funds. For reference, EA Funds has processed $775,000 so far, with $31,000 in monthly recurring donations. We expect the pace of growth in the near future to be slower than it was in the first three months as the initial buzz around EA Funds dies down.

Potential options

Don’t add new funds

The first option is that we shouldn’t add new funds at all. For example, we might want to tweak the existing funds by selecting new fund managers or by having multiple people manage certain funds, but we might not want to expand past a small number of funds that represent the most widely-supported causes.

Add new funds, but later

We might want to add new funds, but only after EA Funds has a longer track record or has reached certain milestones. For example, we might only want to add funds after a year, or once we’ve moved a certain amount of money, or once we’ve reached a certain amount of money in monthly recurring donations.

Add new funds now

Finally, we might opt to add new funds very soon.

Considerations

Future growth of EA Funds

Whether to add new funds depends, at least in part, on how much money might be available to support them, which in turn depends on EA Funds’ future growth prospects.

This is hard to determine, but here are some guesses. First, I don’t expect us to raise as much money over the next three months as we did over the initial three months. Much of the money we do raise will be driven by the $31,000 in monthly recurring donations that have already been set up. However, donors with recurring donations are unlikely to change their allocations to include new funds. This means that it may be relatively difficult to move significant amounts of money through new funds in the short term.

On the other hand, the user base of EA Funds is still relatively small (around 665 unique donors), so there may be significant low-hanging fruit in getting people already involved in EA to consider using the platform. Additionally, adding a fund that meets an as-yet unmet demand could cause additional money to flow through the platform in a way that doesn’t cannibalize existing funds.

Viewpoint diversity

All of our current funds are run by GiveWell/Open Phil staff members. As we’ve stated in the past, we aim to have 50% or fewer of the fund managers work at GiveWell/Open Phil. Adding more funds seems like the most plausible way to achieve this goal.

Reputation

Adding new funds that are significantly worse than the existing options might harm the reputation of EA Funds, CEA, and EA in general. Conversely, adding high-quality funds in new areas may improve the reputation of EA by further showcasing the ability of the EA community to find interesting ways of improving the world.

Question 2: What kinds of funds should we add?

Our existing funds each focus on a single broad cause area that EAs have historically supported. The existing funds were designed to give fund managers relatively wide latitude to decide what use of funds is best while also making it clear to donors what the funds might donate to.

One question for the future is whether we should expand EA Funds by adding new funds in new cause areas or whether we should expand by adding new funds built around themes other than cause areas.

Potential options

Below are some options for the kinds of funds we might add. Keep in mind that these options are not mutually exclusive; we could pursue several of them.

New funds in new cause areas

We could simply add new funds in new cause areas. These would operate similarly to the existing funds.

New funds in existing cause areas

We could add funds in existing cause areas that have the same scope as the current funds. For example, we could add a second fund in global health and development which has the same scope as the fund managed by Elie, but which is managed by someone else.

Fund manager’s discretion

We could add funds that give the fund manager wide latitude to recommend a grant to whatever they think is best regardless of cause area.

Different approaches to existing causes

We could add funds that take a different approach to existing cause areas. For example, we could add a fund in global health and development that focuses on high-risk, high-reward projects (e.g. startups, funding evaluations rather than direct interventions), or we could add a long-term future fund that focuses on areas other than AI safety.

Funds based on particular tactics

We could add funds which are focused on particular tactics instead of cause areas. For example, we could add a fund which donates only to startups or which funds research projects. These funds could operate across a variety of cause areas.

Funds based on normative disagreements

We could add funds which are based on specific normative disagreements. For example, we could have a fund which focuses predominantly on improving (and not necessarily saving) lives or a fund which focuses on reducing suffering.

Considerations

The chicken-and-egg problem for new causes

For a fund in a new cause area to succeed, it needs both money and high-quality projects to support with that money. This presents different problems for EA Funds than those faced by large funders with an endowment, like Open Phil. Because Open Phil already has the money, it can declare an interest in funding some new area and then use the promise of potential funding to encourage people to start new projects. If no projects show up, it can simply redirect the money elsewhere.

However, in EA Funds, the ability of a fund to attract money is partially dependent on the existence of promising projects to fund (since a fund without plausible grantees will have a hard time getting donations). This means that EA Funds may find it difficult to catalyze activity in completely novel areas.

Clarity

It should be relatively easy for donors to figure out what they’re supporting if they donate to a fund. For donors willing to research, the fund page should be sufficient to help them understand each fund.

However, not all donors will carefully read the fund pages, and many will choose which fund pages to review based on the name and perhaps a short description of each fund. While we hope donors will look at the details of each fund, realistically a fund’s name alone may have a disproportionate effect on whether people choose to support it.

Fund names should satisfy two goals:

  1. The name should make it clear what the fund is likely to support.

  2. The name should make it clear how the fund is different from the other available funds.

However, some options for adding new funds present greater clarity challenges than others. For example, funds in the same cause area as existing funds will present a particular challenge in choosing names that make it easy to understand how the funds differ. Similarly, funds that operate at the fund manager’s discretion will be difficult to name in a way that makes it clear what the fund is likely to support.

Expanding EA’s intellectual horizons

Adding funds in areas outside of global health, animal welfare, the long-term future, and the EA community would help expand the intellectual horizons of EAs and help us find promising new cause areas.

Question 3: How should we vet new funds?

Our current funds cover problem areas that we think are especially promising and that have wide community support, and each is run by a fund manager who we think has strong knowledge of and connections in the area. We could attempt to ensure that any new funds adhere to similar standards, or we could substantially open up the platform and allow anyone (or nearly anyone) to create a fund of their own.

Below I try to outline a continuum of plausible options for the degree to which we ought to vet new funds. I then outline some considerations that are relevant for deciding where we ought to fall along this continuum.

Potential options

No vetting

On one extreme end of the continuum, we could let anyone create a fund which they manage however they want and which anyone can donate to. To add slightly more quality control, we could require certain kinds of reporting and a standard set of information on each fund’s page.

Democratic vetting

We could let anyone create a fund, but only keep funds that receive a certain amount of support from the community (e.g. donations or “votes” of some kind). We could instead let anyone propose a fund, but only accept some small number of funds as determined by community support (e.g. pledges to donate).

Plausibility vetting

We could let anyone propose a fund, but then have CEA (or some set of trusted researchers) review the proposals and reject any funds which we think are not plausibly good candidates.

The precise definition of “plausibility” in this context is up for grabs, but the goal would be to reject only the funds and fund managers which seem like especially poor options. The process could use some method of democratic vetting to further narrow down the field from among the plausible options.

“Reasonable-person” vetting

Using the process described above, we could apply a stricter “reasonable person” standard. The goal would be to accept only funds which a reasonable person might think are better than some benchmark. For example, we could only allow funds which a reasonable person might think are better than AMF or better than the existing funds. Anyone could propose a fund and have this standard applied, or proposing a fund could be an invite-only process.

“Better than” vetting

Finally, we could only accept funds that CEA (or some set of trusted researchers) think are better than the existing options by some criterion of betterness. This is different from the reasonable-person standard because it requires that we think the fund is actually better than the existing options, not merely that we could see how someone might think that the fund is better.

Hybrid options

We could also combine multiple approaches to form hybrid options. Some rough ideas for how we might do this are below:

  • Start closed and open up over time

    • We could start by vetting funds very closely for the first few rounds of adding new funds, then decrease the vetting requirements over time.

  • Low vetting plus nudges

    • We could provide very little vetting for creating a fund, but nudge users towards the funds that we think are most promising. For example, the default [allocation page](https://app.effectivealtruism.org/donations/new) could include only highly promising funds, with less promising options made less immediately obvious.

Considerations

Below are some considerations that might factor into the decision of how closely to vet new funds. These are presented in no particular order.

Inclusion in EA Funds as a nudge

User behavior so far suggests that many people choose to split their donation among several funds instead of donating all of their money to a single fund. This suggests that donors see inclusion in EA Funds as a sign of quality and that a fund’s inclusion nudges people to donate to causes they might not have given to otherwise. This was also borne out in some Skype conversations we had with early users.

This increases the potential for new funds to cause harm by attracting money that might have been better spent elsewhere.

Administrative costs

Each fund adds a small but nontrivial administrative cost to CEA.

For each fund, CEA needs to communicate with the fund manager regularly about the amount of money available, about new grant recommendations, and about posting updates to the website. We also incur administrative costs every time a grant is made, as we need the trustees to approve the grant and we need to work with the charity to get them the money. We could probably develop systems to decrease administrative costs if the scale of the project required this, but we likely wouldn’t be able to do so in the short term.

Reputation

Lower-quality funds might harm the reputation of EA Funds, CEA, and EA in general.

Recruiting high-quality fund managers

Low-quality funds might make it harder to acquire (and retain) high-quality fund managers as being associated with the project becomes less prestigious.

Researcher recruitment

One source of value from EA Funds is that it might help incentivize talented researchers to do high-quality work on where people ought to donate. Lower barriers to entry in setting up a fund might increase the pipeline of researcher talent that EA Funds helps create.

Funding externally controversial projects

One affordance we’d like EA Funds to have is the ability to fund high-impact but externally controversial projects.

Plausibly, the more funds we have, and the more EA Funds operates as an open platform, the less the actions of a single fund will negatively affect the platform as a whole. So adding more funds might give us more affordance to fund controversial projects.

New funds and acquiring new users

It seems plausible that more funds would make it easier to attract more users, for two reasons. First, when someone sets up a fund they will likely reach out to their network to get people to donate, which may help us acquire users. Second, the more variety we offer, the more likely it is that donors will find funds that strongly resonate with them.

The marketplace of ideas

Lower barriers to entry would promote a more open and thriving marketplace of ideas about where people should donate.

Expertise

EA Funds was conceived as a way of making individuals’ donation decisions easier by allowing them to draw on people or groups with greater subject-matter expertise, who are more up-to-date with the latest research on their fund’s topic, current funding opportunities in the space, and organizational funding constraints. There is a tradeoff between creating fewer new funds that are genuinely expert-led and a greater number of funds where the average level of expertise is lower.

Conclusion

This post has attempted to describe some of the open questions about EA Funds and the relevant considerations, as a way to solicit feedback and new ideas from the EA community. I look forward to a discussion in the comments here and in person with anyone at EA Global: Boston this weekend.

The next steps for this process are for me to review comments on this post and to discuss the topic with the rest of the CEA team. Afterward, I plan to write a follow-up post that outlines either the option we selected and why, or the options we’re currently deciding between. If you have thoughts that you’d prefer not to share here, feel free to email me at kerry@effectivealtruism.org.

Please note that due to EA Global: Boston, CEA staff might be slower to respond to comments than usual.