Should there be an EA crowdfunding platform?
The Problem
As noted in two recent discussions, there may be many promising EA projects that are unable to secure sufficient funding. The cause seems to be that there are few funding sources for new projects: the Open Philanthropy Project focuses on larger grantees, EA Grants and EA Funds appear staff limited, peripheral EAs prefer to fund established organizations, and core EAs may have difficulty evaluating the competence of a person, which is an important factor for early stage projects. In this post, I explore one possible solution: a crowdfunding platform for projects that are endorsed by trusted EAs. (Note that a crowdfunding platform was proposed by Linda Linsefors in response to a comment by David Moss on this Facebook post.)
How It Could Work
Below, I attempt to work out the details of how a crowdfunding platform could work. Of course, there are many different ways to set one up, which means you can reject this specific proposal without rejecting the general idea.
Note: I am not an employee of the Centre for Effective Altruism. What appears below is a description of an idea, not an announcement of something that CEA plans to implement.
The Centre for Effective Altruism (CEA) invites people who have been involved in the EA community at a deep level for several years to serve as evaluators for one or more cause areas. (Nobody can apply to be an evaluator, which means there are no explicit rejections.)
Evaluators who accept the invitation for a cause area must agree to rate all submissions within that cause area (to avoid selection bias) and to keep their ratings confidential (to encourage honesty).
The goal is to have a large number of evaluators for each cause area.
Anyone with one year of substantial involvement in EA can submit a project proposal to CEA.
Proposals must include a description of the idea, an estimate of the probability of success, the benefits if the project is successful, any possible harms and the probability of each possible harm, and the amount of funding needed.
CEA anonymizes the proposals and sends them to all evaluators for the relevant cause area.
Each of those evaluators rates the idea from −10 to 10. (The lower end of the scale is −10 to allow evaluators to indicate that they think the project has a negative expected value.)
Evaluators can include feedback encouraging/discouraging the proposer from pursuing the project further and/or providing suggestions for improvement.
Evaluators can also endorse a proposal (if they think it’s a good idea) or endorse a person (if they believe the person is highly competent).
The proposal is considered approved if it meets the following criteria:
a. the average rating is above x;
b. there are n people who endorse the idea; and
c. there are m people who endorse the person.
(The process for determining whether a proposal is approved is automated and CEA never sees the rating of any specific evaluator or the identity of the evaluator(s) endorsing a person.)
Once a proposal is approved, all evaluators in the relevant cause area estimate the probability the project will succeed if undertaken. Those who endorsed the idea provide a brief statement explaining why they did so. (There is no statement for endorsing a person.)
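The approval rule above is mechanical, so it can be sketched in a few lines. This is a hypothetical illustration, not anything CEA has specified: the thresholds `x`, `n`, and `m` are deliberately left open in the post, so the values below are placeholders.

```python
from statistics import mean

def is_approved(ratings, idea_endorsements, person_endorsements,
                x=3.0, n=2, m=2):
    """Automated approval check as described above.

    ratings: one integer in [-10, 10] per evaluator.
    idea_endorsements / person_endorsements: counts of evaluators
    who endorsed the idea / the person.
    x, n, m: placeholder thresholds (unspecified in the post).
    """
    return (mean(ratings) > x
            and idea_endorsements >= n
            and person_endorsements >= m)

# Average rating 4.0 with enough endorsements of both kinds -> approved
print(is_approved([6, 2, 8, 0], idea_endorsements=3, person_endorsements=2))
```

Because the check is automated, CEA never needs to see any individual evaluator’s rating, only the pass/fail result.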
CEA can veto an approved proposal. (This helps prevent the unilateralist’s curse and helps manage reputational risk.) However, this power is exercised sparingly since vetoing all proposals except those supported by CEA would result in this platform becoming EA Grants.*
Proposers are informed of whether their proposal was accepted (meaning it was approved without being vetoed) or rejected (meaning it failed to secure approval or was vetoed) as well as any feedback from the evaluators. Proposers do not see the average rating or the number or identity of people who endorsed the proposer. Additionally, unsuccessful proposers do not see the number or identity of people who endorsed the idea or the reason that the proposal was rejected (i.e. which of the approval criteria it failed to satisfy and whether it was vetoed). CEA periodically releases aggregate statistics.
A rejected proposal can be resubmitted if CEA determines that it’s been materially improved.
Those proposals that are accepted appear publicly on a platform alongside the name and statement of evaluators who endorsed the proposal, the average estimated probability of success (endorsers only), the average estimated probability of success (all evaluators), and CEA’s estimate of how much money it would take to fully fund the project. The names of evaluators who endorsed the proposer do not appear publicly and are not disclosed to anyone.
The proposer can either choose to only allow unconditional donations or to also allow conditional donations (money that will be returned unless the amount needed to fully fund the project as estimated by CEA is raised within a certain period of time).
Proposers can return all donations if they receive too little to go forward with the project.
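The conditional-donation mechanism above works like a Kickstarter-style assurance contract. A minimal sketch of the settlement logic at the campaign deadline (function and parameter names are my own, purely illustrative):

```python
def settle_campaign(unconditional, conditional, funding_goal):
    """Settle a campaign at its deadline.

    unconditional: donations the proposer keeps regardless of outcome
                   (unless the proposer chooses to return everything).
    conditional: donations returned unless the funding goal (CEA's
                 estimate of full funding) is met in time.
    funding_goal: CEA's estimated cost to fully fund the project.
    Returns (amount_kept, amount_refunded).
    """
    total = sum(unconditional) + sum(conditional)
    if total >= funding_goal:
        return total, 0          # goal met: all donations go through
    return sum(unconditional), sum(conditional)  # refund conditional gifts
```

For example, $100 unconditional plus $700 conditional against a $1,000 goal means the proposer keeps only $100 and $700 is refunded; the proposer could then also choose to return that $100 if it is too little to proceed.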
Proposers who take the money must post an update every y months for a period of z years.
*Alternatively, this system could be used to evaluate ideas for EA Grants (with projects that are approved but not funded or not fully funded listed on the platform).
Note: I am not an employee of the Centre for Effective Altruism. What appears above is a description of an idea, not an announcement of something that CEA plans to implement.
Benefits
1. It could increase the number of worthwhile projects funded since:
a. some projects that are currently disfavored by a single person who controls a key funding source would be rated well by a large group of evaluators, which could influence that person;
b. people may have more confidence in projects that are currently recommended by others if they know that those recommendations represent the overall views of the community*;
c. more people would have access to the donation recommendations of those who currently only recommend worthwhile projects to others privately; and
d. there would be a single platform where busy donors could easily find most project ideas alongside relevant information.
*Think of this as allowing people to get additional draws from the distribution of how members of the EA community view the project, which allows them to determine whether those recommending the project are at the median of the distribution of community views or whether they are at the right tail (and also whether and to what extent the left tail goes into the negative).
2. Through the process of launching new projects, EAs would build valuable skills (which is possible even when projects ultimately fail).
3. People may be less likely to unilaterally start a bad project if there is a formal mechanism for an idea to receive a firm rejection from the community. (The feedback could also result in improvement to good ideas but I think that’s currently already possible.)
4. Allowing people to donate on the condition that others donate could help avoid the collective action problem that arises when it is only worthwhile to fund a project if enough other money is going towards it (and where your money alone could fund a smaller scale version of the project, meaning that the proposer would not necessarily return it if you donated it unconditionally).
5. It would allow the EA community to learn valuable information such as:
a. the number and quality of project ideas in the community;
b. the probability of a project succeeding (by type, by cause);
c. the average accuracy of predictions (and which people are above average); and
d. why projects fail and what can be done to avoid failure.
Potential Costs
1. There are various costs associated with projects being funded that otherwise would not have been funded including:
a. the opportunity cost of the funding, which could have gone elsewhere;*
b. the opportunity cost of the talent of people working on the project, which could have been applied within an existing EA organization;
c. the risk that the project causes harm; and
d. the risk of reputational harm from the project failing or causing harm.
If people tend to be overly optimistic about proposals (and thus overestimate expected benefits), then they will sometimes fund projects despite the above costs being greater than the expected benefits.
*The opportunity cost might be especially high if the money would have gone to EA Funds, the fund manager would also have given it to risky projects with high expected value, and the fund managers are much better at judging which ones are likely to succeed.
2. It could increase the reputational cost for harmful projects (including ones that would have occurred absent the platform) by making it harder to distance the EA community from such projects.
3. Scammers may join the EA community and seek project funding in bad faith. Not only does this cause all of the problems identified above (diverted money, lost time, reputational harm etc.), it could decrease trust within the EA community.
4. There is an opportunity cost to the time spent creating this system, the time spent managing it, the time spent writing proposals, and the time spent evaluating them. Given the availability of alternative ways of announcing projects* and endorsing them, there may be relatively few people who would use this system to propose ideas. If so, then the opportunity cost might be greater than the benefits.
*These may be bad examples of using a public announcement to generate initial funding since many of these seem to have gathered sufficient funding to launch before being announced.
5. The choice of evaluators could cause hurt feelings for those who are excluded. The rejection of a proposal might cause hurt feelings for the proposer and might even cause them to underestimate their abilities in the future.
6. The platform could become the default path (to the point that raising money for new projects through other channels is disfavored), which could
a. entrench the status quo in terms of cause areas and strategies;
b. make it harder for low probability, high magnitude projects to get funding (if evaluators only endorse projects they think are likely to succeed);
c. make it harder to quickly launch a project; and
d. make it harder to launch projects that require some secrecy.
Ultimately, I’m unsure as to whether this would be a good idea. My primary motivation in posting this is to generate more discussion on this topic.
I like CEA’s work and people a lot, but I envision a world where they’re not the only group that is able to and trusted to lead community projects.
I like a lot of the directions here. My main concern is that the current implementation details here seem like a lot of work, when it seems like Grant Evaluations is already fairly bandwidth constrained.
Some alternate that I think might make a middle ground between “everyone pitches ideas randomly to the EA forum / kickstarter etc” and the “highly structured vetting process” described here:
Right now, there are several EA grantmaking bodies (CEA, BERI, OpenPhil, EA Funds, etc). My impression is there is some duplication of labor in setting up each grant funnel, and duplication of effort for a given project to submit multiple grants.
Some of those orgs actually have different requirements for who they donate to, so it makes sense for them to have different processes
But, I’d expect most of the core process to be pretty similar.
So, proposal: create a common application process which includes whatever submission criteria are shared between grantmakers, with whatever additional details are required for specific orgs. This doesn’t create any additional obligations on people’s time, just streamlines the work that’s already being done.
You could potentially also share the application publicly.
There might be additional details to work out to prevent information cascades, and to optimize the epistemics of the system.
I’m impressed and pleased that you gave credit to Linda for the original idea. Well done for perpetuating good social norms.
I’m pleased and impressed you thanked someone for perpetuating good social norms, which I think helps perpetuate social norms, and have therefore upvoted your comment (#meta).
I’m pleasantly impressed you perpetuated a good social norm by thanking someone for perpetuating a good social norm. Well done and thank you.
Thanks for the even-handed explication of an interesting idea.
I appreciate the example you gave was more meant as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas drawn tighter to the proposed limitations.
You note this set of challenges:
Open Phil targets larger grantees
EA funds/grants have limited evaluation capacity
Peripheral EAs tend to channel funding to more central groups
Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.
The result is that a good person (but one not known to the right people) with a good small idea is nonetheless left out in the cold.
I’m less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote on one of the FB threads linked:
FWIW (and non-resiliently), I don’t look around and see lots of promising but funding starved projects. More relevantly, I don’t review recent history and find lots of cases of stuff rejected by major funders then supported by more peripheral funders which are doing really exciting things.
If not, then the idea here (in essence, of crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn’t seem to address #3 or #4.
If most of the money (even from the community) ends up going through the ‘core’ funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.
More importantly, if funders generally want to ‘find good people’, the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.
Per Michael, I’m not sure what this idea has over (say) posting a ‘pitch’ on this forum, doing a kickstarter, etc.
Edit: I heard a round of EA Grants applications had opened for this year, but according to the EA Grants website that doesn’t currently appear to be the case. I was mistaken. I did hear from community members, though not directly from anyone at the CEA, that there will be more EA Grants, and I assume applications will open at some point, but the CEA hasn’t said anywhere when.
It should be noted that EA Grants and EA Funds are different programs with different issues. Last year EA Grants was limited by staff time, but I don’t recall anyone directly saying that was the case with EA Funds. There is another round of EA Grants this year, but no data has come out about it yet. I expect the CEA is putting more staff time into it to solve the most obvious flaw with last year’s EA Grants.
Each of the EA Funds has been performing differently. Last year, when there were infrequent updates about the EA Funds, it turned out the CEA was experiencing technical delays in implementing the EA Funds website. Since then, while it’s charitably assumed (fairly, I think) that each of the fund managers might be too busy with their day jobs at the Open Philanthropy Project to give much attention to fund management, neither the CEA nor Open Phil has confirmed such speculation. The Funds also vary in their performance: Lewis Bollard has continually made many smaller grants to several smaller projects from the Animal Welfare Fund, in contrast with Nick Beckstead, who has made only one grant from each of the two funds he manages, the Far Future Fund and the EA Community Fund. I contacted the CEA, and they let me know they intend to release updates on the Far Future Fund and EA Community Fund (which I assume will include disclosures of grants they’ve been tabling the last few months) by July.
One problem with smaller organizations and less experienced teams is that they don’t know how to independently and effectively pitch or raise funds for their project, even when they’re good people with good ideas. Compounding this is a sense of dejection among nascent community projects once the big funders have rejected their grant applications, especially for otherwise qualified EA community members who don’t know how to navigate the non-profit sector. This is feedback I’ve gotten from community members who know of projects which didn’t get off the ground, and the fact that they faltered quietly may be why they go unnoticed. That stated, I don’t think there are a ton of promising but funding-starved projects around.
On the flip side, I’ve heard some community members say they’re overlooked by earning-to-give donors after being overlooked by, e.g., the EA Grants. The apparent reasoning: since individual donors don’t have the bandwidth to evaluate projects, they defer to the apparently expert judgement of the CEA, and since the CEA didn’t fund the project, they conclude it isn’t fit to receive their funding either. This creates a ludicrous Catch-22 in which projects won’t get funding from smaller donors until they have evidence of quality in the form of donations from big donors, which, if they had, would mean they didn’t need to approach the smaller donors in the first place. This isn’t tricky epistemology or the CEA unwittingly creating perverse incentives. Given that EA Grants said they lacked the bandwidth to evaluate many potentially valuable projects, declining to donate to a small project because it didn’t receive an EA Grant is unsound. It’s just lazy reasoning, since smaller donors don’t have the bandwidth to properly evaluate projects either.
Ultimately I think we shouldn’t hold single funders like CEA and Open Phil primarily accountable for this state of affairs; the community needs to independently organize to better connect funding with promising projects. I think this is a problem in need of a solution, but something like a guide on how to post pitches or successfully crowdfund a project would work better than creating a brand new EA crowdfunding platform. Joey Savoie recently wrote a post about how to write EA Forum posts to get new causes into EA, as a long-time community member who himself has lots of experience writing similar pitches.
Unfortunately, advocating for core funding groups to change their strategy has practical costs that are apparently so high that appeals like this on the EA Forum feel futile. Direct advocacy to change strategy is too simplistic, and long essays on the EA Forum that ground the epistemological differences between individual effective altruists and the CEA or Open Phil receive little to no feedback. I think from the inside these organizations are so narrowly focused on maximizing goal satisfaction that they don’t have the time to alter their approach in light of critical feedback from the community, all while feeling it’s important to carry on with the very approaches others in the community are unhappy with. So while I think a crowdfunding platform is not the right solution in this instance, advocating for changes to existing funds seems uncompetitive as well, and designing other parallel funding routes is something I’d encourage effective altruists to do.
I haven’t seen the launch of 2018 EA grants—could you link to it?
I heard a round of EA Grants applications had opened for this year, but according to the EA Grants website that doesn’t currently appear to be the case. I was mistaken. I did hear from community members, though not directly from anyone at the CEA, that there will be more EA Grants, and I assume applications will open at some point, but the CEA hasn’t said anywhere when.
Some way of distributing money to risky ventures, including fundraising, in global poverty and animal welfare should probably exist.
I think it’s pretty reasonable if CEA doesn’t want to do this because (a) they take a longtermist view and (b) they have limited staff capacity so aren’t willing to divert many resources from (a) to anything else. In fact, given CEA’s stated views it would be a bit strange if they acted otherwise. I know less about Nick, but I’m guessing the story there is similar.
https://www.centreforeffectivealtruism.org/ceas-current-thinking/
I have a limited sense for what to do about this problem, and I don’t know if the solution in the OP is actually a good idea, but recognising the disconnect between what people want and what we have is a start.
I may write more about this in the near future.
I should have been clearer in my classification of donors. Other than institutional sources (Open Phil, EA Grants, EA Funds), I see three primary categories:
EAs who are only willing to give to charities recommended by GiveWell or ACE [what I meant when I said peripheral EAs]
EAs who are willing to give to other organizations where the impact is less concrete but who do not know enough to know which project ideas are good [there may be many earning to give people in this category]
EAs who are willing to give to other organizations where the impact is less concrete and do know enough to know which project ideas are good [this is the category from which evaluators would be drawn]
My concern is that people in category 2 have to rely on the choices of institutional donors to guide them. I want people in category 2 to know about projects that are viewed highly by people in category 3 but rejected by institutional donors.
Under the proposed system, an evaluator can endorse a project idea and/or the person. In order for a proposal to appear on the platform, there would have to be at least n idea endorsements and m personal endorsements. Thus, potential donors would know for all proposals that there are at least m core EAs who think the person is sufficiently competent.
I’d be curious to see the reject list for EA Grants.
I think EA Grants is a great idea for essentially crowdsourcing projects, but it would be nice to have more transparency around how the funding decisions are made, as well as maybe the opportunity for people with different approaches to see and fund rejected grants.
well said!
I think this idea is interesting but I’m unconvinced by the form you’ve chosen. As I’ve understood it, it seems to involve quite a lot of vetting and EA time before projects reach the stage where they can ask people for funding. What’s your objection to having an EA equivalent of GoFundMe/Kickstarter where people can just upload their projects and then ask for funding? I imagine this could also work on the system that projects are time-limited, and if they don’t receive the funding they seek, all the money gets returned to the donors.
I think you mean “unconvinced”?
thanks. edited.
The proposed system has two vetting steps: approval by the evaluators and the CEA veto.
The main reason for the CEA veto is to prevent the unilateralist’s curse and reputational harm.
The main reason for the approval process is to give potential donors more information. If this was the only reason, then it would make sense to make this step voluntary. But this step also helps CEA decide whether to veto (for example by seeing if there are a few very negative ratings), which is why it’s mandatory in the proposed system.
I agree with you that there’s a large opportunity cost to the EA time that would be spent, which is part of why I’m unsure as to whether the proposed system would be a good idea.
Do you think that an unvetted/community-vetted crowdfunding platform would be worse for reputation risks than the EA Forum? (I think the forum is a good comparison because it is public, but most often visited by people quite involved in EA.)
I agree that most people would stop pursuing their project if they received negative feedback via the EA Forum (with upvotes being an indicator of the level of community agreement), but people on the EA Forum may understate how negatively they view the project (for reasons of politeness). And even mildly negative feedback may cause significant embarrassment for the person (which could deter people from asking for money publicly).
The platform would allow candid rejection of a bad idea without the embarrassment. It would also make it more likely that a good idea that starts from a bad idea will be funded. On a public platform, people with limited time may be inclined to dismiss a greatly improved version of a previously rejected idea. By contrast, if CEA allows for a resubmission on the grounds of significant improvement, the evaluators would know to give the new proposal serious consideration.
Perhaps a trial version could be created which is not for money, but just for kudos?
EA-GoKudoMe ?!
Two links on how to create a crowdfunding platform:
www.thrinacia.com/blog/post/6-easy-steps-to-create-a-crowdfunding-website
https://dev.to/nitlogan/how-to-create-your-own-crowdfunding-platform
Someone just try and build something.
I do think this is a promising idea, but coordination-technology is actually an area where I think it’s pretty important to get a bunch of nuances right, where just building a thing is a) unlikely to work, b) causes harm to future attempts to build the thing.
You don’t just need to build tech, you need to get lots of people on board with it at once. And every instance of getting everyone on board with a thing has a large cost, and every failed instance of that makes people less willing to try out the next thing.
Agree. It’s a good enough idea that I’d like to see a first draft.
I don’t think of having a (very) limited pool of funders who judge your project as such a negative thing. As has been pointed out before, evaluating projects is very time intensive.
You’re also implicitly assuming that there’s little information in the rejection of funders. I think if you have been rejected by 3+ funders, where you hopefully got a good sense for why, you should seriously reconsider your project.
Otherwise you might fall prey to the unilateralist’s curse—most people think your project is not worth funding, possibly because it has some risk of causing harm (either directly or indirectly by stopping others from taking up a similar space) but you only need one person who is not dissuaded by that.
I like the reduction of high time costs and specialization of trade, but a small pool of funders means that if (a) they don’t have time for you, your project dies and (b) if they don’t share your theory of change, your project dies.
On (a), it does seem like staff time bottlenecks have prevented a lot of funding from going to a lot of good projects (see EA Funds).
On (b), I admit that there’s a fine line between “this person is wrong and their project just shouldn’t happen” to “this person has a good idea but it just isn’t recognized by the few funders”. It does seem to me, however, that the current funding system does have some groupthink around certain policies (e.g., “hits based giving”) that may not universally select every good project and reject every bad project. It would be nice for there to be somewhat more worldview diversification in what can get funded and I’m seeing a lot of gaps here.
Maybe my view of the landscape is naive, but it appears to me that a lot of spaces these days have effectively just one or two funders that can actually fund a project (e.g., Elie for poverty interventions, Lewis + ACE for nonhuman animal interventions, Nick for AI interventions, and Nick + CEA for community projects and I imagine these two groups confer significantly). I don’t think we need dozens of funders, but I think the optimal number would be closer to three or four people that think somewhat differently and confer only loosely, rather than one or two people.
We do not disagree much then! The difference seems to come down to what the funding situation actually is and not how it should be.
I see a lot more than a couple of funders per cause area—why are you not counting all the EtGers? Most projects don’t need access to large funders.
Glad to hear we agree! :)
I’m a bit out of the loop, but my assumption is that there are far fewer EtGers these days and that they’re not easy to find. I’m unsold that a crowdfunding platform is a good solution, but I do think that identifying funders for your project is not an easy task, and there might be opportunity around improving the diversity and accessibility of this ETG pool.
As Peter hints at below, and as I’ve mentioned in another comment, the problem appears to be that as soon as smaller donors learn a project’s funding application was rejected by a more influential funder, such as EA Grants, they reject it too. So what some projects experience isn’t serial rejection by three independent funders, but rejection after the first funder’s rejection becomes common knowledge. The problem appears to be that the funders with the most money or the best affective reputation in EA are implicitly assumed to also have the soundest approaches for assessing projects, which shouldn’t be the case.
At least in academia, the success rate on proposals might only be 10% or 20%, and there is varying alignment between funders and your goals. So you would need a lot of rejections to have confidence that an idea is not a good one. But I can see that it could take fewer rejections from aligned EA donors.
One small point: the title of this is “Should there be an EA crowdfunding platform?” You list the costs and benefits in the post, which is fantastic, but I’m not a big fan of the wording of the title. “Should” is a really messy word; what I expect this means is something more specific, like “Does the expected value of this project outweigh the opportunity costs?” A similar, shorter title could be “What is the net benefit of an EA crowdfunding platform?”
This comment is in part for others seeing this who may make similar posts in the future.