Should there be an EA crowdfunding platform?

The Problem

As noted in two recent discussions, there may be many promising EA projects that are unable to secure sufficient funding. The cause seems to be that there are few funding sources for new projects: the Open Philanthropy Project focuses on larger grantees, EA Grants and EA Funds appear staff-limited, peripheral EAs prefer to fund established organizations, and core EAs may have difficulty evaluating the competence of a person, which is an important factor for early-stage projects. In this post, I explore one possible solution: a crowdfunding platform for projects that are endorsed by trusted EAs. (Note that a crowdfunding platform was proposed by Linda Linsefors in response to a comment by David Moss on this Facebook post.)

How It Could Work

Below, I attempt to work out the details of how a crowdfunding platform could work. Of course, there are many different ways to set one up, which means you can reject this specific proposal without rejecting the general idea.

Note: I am not an employee of the Centre for Effective Altruism. What appears below is a description of an idea, not an announcement of something that CEA plans to implement.

The Centre for Effective Altruism (CEA) invites people who have been involved in the EA community at a deep level for several years to serve as evaluators for one or more cause areas. (Nobody can apply to be an evaluator, which means there are no explicit rejections.)

Evaluators who accept the invitation for a cause area must agree to rate all submissions within that cause area (to avoid selection bias) and to keep their ratings confidential (to encourage honesty).

The goal is to have a large number of evaluators for each cause area.

Anyone with one year of substantial involvement in EA can submit a project proposal to CEA.

Proposals must include a description of the idea, an estimate of the probability of success, the benefits if the project succeeds, any possible harms (with the probability of each), and the amount of funding needed.

CEA anonymizes the proposals and sends them to all evaluators for the relevant cause area.

Each of those evaluators rates the idea from −10 to 10. (The lower end of the scale is −10 to allow evaluators to indicate that they think the project has a negative expected value.)

Evaluators can include feedback encouraging/discouraging the proposer from pursuing the project further and/or providing suggestions for improvement.

Evaluators can also endorse a proposal (if they think it’s a good idea) or endorse a person (if they believe the person is highly competent).

The proposal is considered approved if it meets the following criteria:

a. the average rating is above x;

b. there are n people who endorse the idea; and

c. there are m people who endorse the person.

(The process for determining whether a proposal is approved is automated and CEA never sees the rating of any specific evaluator or the identity of the evaluator(s) endorsing a person.)
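As a sketch, the automated approval check described above could look like the following. The threshold values and data shapes here are illustrative assumptions, not part of the proposal; x, n, and m would be chosen by CEA.

```python
# Illustrative sketch of the automated approval rule: a proposal is approved
# only if all three criteria hold. X, N, M stand in for the x, n, m
# thresholds that CEA would choose.

from dataclasses import dataclass

X = 2.0  # minimum average rating (ratings run from -10 to 10)
N = 3    # minimum number of evaluators endorsing the idea
M = 2    # minimum number of evaluators endorsing the person

@dataclass
class Proposal:
    ratings: list          # one rating per evaluator, each in [-10, 10]
    idea_endorsers: set    # evaluators who endorsed the idea
    person_endorsers: set  # evaluators who endorsed the proposer

def is_approved(p: Proposal) -> bool:
    """All three criteria must be satisfied for approval."""
    avg = sum(p.ratings) / len(p.ratings)
    return (avg > X
            and len(p.idea_endorsers) >= N
            and len(p.person_endorsers) >= M)
```

Because the check is a pure function of the ratings and endorsement counts, it can run without any human (including CEA) seeing an individual evaluator's rating.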

Once a proposal is approved, all evaluators in the relevant cause area estimate the probability the project will succeed if undertaken. Those who endorsed the idea provide a brief statement explaining why they did so. (There is no statement for endorsing a person.)

CEA can veto an approved proposal. (This helps guard against the unilateralist's curse and helps manage reputational risk.) However, this power is exercised sparingly, since vetoing all proposals except those supported by CEA would result in this platform becoming EA Grants.*

Proposers are informed of whether their proposal was accepted (meaning it was approved without being vetoed) or rejected (meaning it failed to secure approval or was vetoed) as well as any feedback from the evaluators. Proposers do not see the average rating or the number or identity of people who endorsed the proposer. Additionally, unsuccessful proposers do not see the number or identity of people who endorsed the idea or the reason that the proposal was rejected (i.e. which of the approval criteria it failed to satisfy and whether it was vetoed). CEA periodically releases aggregate statistics.

A rejected proposal can be resubmitted if CEA determines that it’s been materially improved.

Those proposals that are accepted appear publicly on a platform alongside the name and statement of evaluators who endorsed the proposal, the average estimated probability of success (endorsers only), the average estimated probability of success (all evaluators), and CEA’s estimate of how much money it would take to fully fund the project. The names of evaluators who endorsed the proposer do not appear publicly and are not disclosed to anyone.

The proposer can either choose to only allow unconditional donations or to also allow conditional donations (money that will be returned unless the amount needed to fully fund the project as estimated by CEA is raised within a certain period of time).

Proposers can return all donations if they receive too little to go forward with the project.
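The conditional-donation mechanism above is essentially an assurance contract, and its settlement logic is simple enough to sketch. The function name and data shapes are hypothetical; the rule it encodes is the one stated above: conditional donations are returned unless the full funding target is reached in time, while unconditional donations are kept either way (though the proposer may still choose to refund everything if too little is raised).

```python
# Illustrative sketch of settling donations once the funding window closes.
# donations: list of (amount, is_conditional) pairs.

def settle(donations, target, deadline_passed):
    """Return (kept, refunded) totals for a closed funding round."""
    total = sum(amount for amount, _ in donations)
    fully_funded = total >= target
    kept = refunded = 0.0
    for amount, is_conditional in donations:
        if is_conditional and deadline_passed and not fully_funded:
            refunded += amount  # conditional money goes back if the target was missed
        else:
            kept += amount
    return kept, refunded
```

For example, if a project needs 200 and receives a conditional 100 and an unconditional 50, the conditional 100 is refunded at the deadline and only the 50 remains with the proposer.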

Proposers who take the money must post an update every y months for a period of z years.

*Alternatively, this system could be used to evaluate ideas for EA Grants (with projects that are approved but not funded or not fully funded listed on the platform).

Note: I am not an employee of the Centre for Effective Altruism. What appears above is a description of an idea, not an announcement of something that CEA plans to implement.

Benefits

1. It could increase the number of worthwhile projects funded since:

a. some projects that are currently disfavored by a single person who controls a key funding source would be rated well by a large group of evaluators, which could influence that person;

b. people may have more confidence in projects that are currently recommended by others if they know that those recommendations represent the overall views of the community*;

c. more people would have access to the donation recommendations of those who currently only recommend worthwhile projects to others privately; and

d. there would be a single platform where busy donors could easily find most project ideas alongside relevant information.

*Think of this as allowing people to get additional draws from the distribution of how members of the EA community view the project. With more draws, they can determine whether those recommending the project sit at the median of the distribution of community views or at its right tail (and also whether, and to what extent, the left tail extends into negative territory).
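A toy illustration of this point, with invented numbers: given the full set of evaluator ratings, a donor can see where an enthusiastic endorser sits relative to the whole community and how negative the most pessimistic view is.

```python
# Toy illustration: locating a recommendation within the full distribution
# of community ratings. The numbers are made up.

import statistics

def percentile_of(rating, all_ratings):
    """Fraction of community ratings at or below the given rating."""
    return sum(r <= rating for r in all_ratings) / len(all_ratings)

# Hypothetical ratings from all evaluators, on the -10..10 scale.
community = [-6, -2, 0, 1, 2, 3, 3, 4, 7, 9]

median_view = statistics.median(community)   # where the typical view sits
endorser_rank = percentile_of(7, community)  # an endorser who rated the project 7
worst_view = min(community)                  # how far the left tail goes negative
```

Here an endorser who rated the project 7 sits at the 90th percentile, well above the median view of 2.5, and the left tail reaches -6; a single private recommendation would reveal none of this.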

2. Through the process of launching new projects, EAs would build valuable skills (which is possible even when projects ultimately fail).

3. People may be less likely to unilaterally start a bad project if there is a formal mechanism for an idea to receive a firm rejection from the community. (The feedback could also help improve good ideas, but I think that is already possible.)

4. Allowing people to donate on the condition that others donate could help avoid the collective action problem that arises when it is only worthwhile to fund a project if enough other money is going towards it (and where your money alone could fund a smaller scale version of the project, meaning that the proposer would not necessarily return it if you donated it unconditionally).

5. It would allow the EA community to learn valuable information such as:

a. the number and quality of project ideas in the community;

b. the probability of a project succeeding (by type, by cause);

c. the average accuracy of predictions (and which people are above average); and

d. why projects fail and what can be done to avoid failure.

Potential Costs

1. There are various costs associated with projects being funded that otherwise would not have been funded including:

a. the opportunity cost of the funding, which could have gone elsewhere;*

b. the opportunity cost of the talent of people working on the project, which could have been applied within an existing EA organization;

c. the risk that the project causes harm; and

d. the risk of reputational harm from the project failing or causing harm.

If people tend to be overly optimistic about proposals (and thus overestimate expected benefits), then they will sometimes fund projects despite the above costs being greater than the expected benefits.

*The opportunity cost might be especially high if the money would otherwise have gone to EA Funds, the fund managers would also have given it to risky projects with high expected value, and the fund managers are much better at judging which ones are likely to succeed.

2. It could increase the reputational cost for harmful projects (including ones that would have occurred absent the platform) by making it harder to distance the EA community from such projects.

3. Scammers may join the EA community and seek project funding in bad faith. Not only does this cause all of the problems identified above (diverted money, lost time, reputational harm, etc.), it could also decrease trust within the EA community.

4. There is an opportunity cost to the time spent creating this system, the time spent managing it, the time spent writing proposals, and the time spent evaluating them. Given the availability of alternative ways of announcing projects* and endorsing them, there may be relatively few people who would use this system to propose ideas. If so, then the opportunity cost might be greater than the benefits.

*These may be bad examples of using a public announcement to generate initial funding since many of these seem to have gathered sufficient funding to launch before being announced.

5. The choice of evaluators could cause hurt feelings for those who are excluded. The rejection of a proposal might cause hurt feelings for the proposer and might even cause them to underestimate their abilities in the future.

6. The platform could become the default path (to the point that raising money for new projects through other channels is disfavored), which could

a. entrench the status quo in terms of cause areas and strategies;

b. make it harder for low probability, high magnitude projects to get funding (if evaluators only endorse projects they think are likely to succeed);

c. make it harder to quickly launch a project; and

d. make it harder to launch projects that require some secrecy.

Ultimately, I’m unsure as to whether this would be a good idea. My primary motivation in posting this is to generate more discussion on this topic.