Update on Effective Altruism Funds
This post is an update on the progress of Effective Altruism Funds. If you’re not familiar with EA Funds please check out our launch post and our original concept post. The EA Funds website is here.
EA Funds launched on February 28, 2017. In our launch post we said:
We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.
Our review of the evidence so far has caused us to conclude that EA Funds will continue past the three-month experiment in some form. However, details such as which funds we offer, who manages them, and the content and design of the website may change as a result of what we learn during the three month trial period to the end of May.
Below we review how EA Funds has performed since launch, unveil our first round of grant recommendations from the fund managers, highlight some of the mistakes we’ve made so far, and outline some of our short-term priorities.
Traction of EA Funds so far
In our launch post we said:
The main way we will assess if the funds provide value to our community is total recurring donations to the EA Funds and community feedback.
We outline our traction on each of these dimensions below.
Donations
At the time of writing, $672,925 has been donated to EA Funds, with an additional $26,861 in monthly recurring donations. Of the total amount donated, $250,000 came from a single new donor whom Will met; in total, EA Funds has received donations from 403 unique donors.
Stats on individual funds are provided below:
| Fund Name | Amount Donated | Monthly recurring donations |
| --- | --- | --- |
| Global Health and Development | $311,562 | $10,529 |
| Animal Welfare | $161,824 | $4,756 |
| Long-Term Future | $118,342 | $8,151 |
| EA Community | $74,704 | $3,156 |
The donation amounts we’ve received so far are greater than we expected, especially given that donations typically decrease early in the year after ramping up towards the end of the year.
We’ve also been impressed by the relative lack of slowdown in new donations over time. New projects typically experience a surge in usage and then a significant slowdown (sometimes called the Trough of Sorrow). While we’ve experienced a slowdown since launch, we’ve also seen a steady stream of around 5-10 new donations per day to EA Funds.
Community feedback
We’ve mostly gauged community feedback through a combination of reading comments on our launch post, reading feedback on EA Funds on Facebook, and talking to people outside of CEA whose opinions we trust. While this way of gauging feedback is far from perfect, our impression is that community feedback has been positive overall. (Note: the claim that the community feedback has been positive overall has been disputed in the comments below.)
In addition, we’ve requested feedback from donors to EA Funds and from the community more generally through a Typeform survey. We asked the Net Promoter Score (NPS) question in both surveys and received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page). While we don’t take NPS (or our sampling method) too seriously, it provides some quantitative data to corroborate our subjective impression.
Some of the areas of concern we’ve received so far include:
Concerns about unsuccessful attempts to do something similar in the past
Concerns about the overall amount of money influenced by Nick
Concerns about centralization leading to less diversity in funding and less funding of new projects
Concerns about creating dependency on Open Phil by charities
One additional area of concern appears in donors’ responses to the following question:
How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?
Responses were on a scale from 0 (not at all likely) to 10 (extremely likely). We only collected 23 responses to this question, but the average score was 7.6 (compared to an average of 8.7 on the NPS question above). Using the NPS scoring system we would get a 0 on this question (the same number of promoters as detractors). This could merely represent healthy skepticism of a new project, or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.
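For readers unfamiliar with how NPS is scored, here is a minimal sketch of how a +56 or a 0 falls out of raw 0-10 responses. The sample responses below are hypothetical, chosen only to illustrate the promoter/detractor cutoffs and how a healthy-looking mean can coexist with an NPS of 0.

```python
def nps(responses):
    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    # Passives (7-8) count toward the total but toward neither group.
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# Hypothetical responses: two promoters and two detractors cancel out,
# so NPS is 0 even though the mean response (7.5) looks reasonably high.
sample = [10, 9, 8, 8, 7, 7, 6, 5]
print(nps(sample))                # -> 0
print(sum(sample) / len(sample))  # -> 7.5
```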
Our preference is that donors give wherever they have reason to believe their donation will do the most good. If EA Funds succeeds in getting donations but fails to first convince donors that it is the highest-impact donation option available, we would substantially reevaluate the project and how we communicate about it. We will continue to evaluate this as the project continues and as we gain more data.
Conclusion
The evidence so far has led us to conclude that EA Funds should continue after the three month trial period. We’ve been impressed with the community response both in terms of feedback and donations and are enthusiastic about the potential to further improve the donation options available over time.
Allocations from Fund Managers
We’re also excited to announce the first round of grant allocations from the fund managers. Details are provided below.
Global Health and Development Fund
By Elie Hassenfeld
I’m planning to allocate all of the funds in the Global Health and Development fund to the Against Malaria Foundation, consistent with GiveWell’s current recommendation to donors.
AMF’s ability to sign additional agreements to distribute malaria nets is currently hampered by insufficient funding.
In addition:
- GiveWell’s Incubation Grants program has evaluated and recommended a handful of grants in the last few months. In each case, Good Ventures followed GiveWell’s recommendation, so I continue to believe that GiveWell’s Incubation Grants program is not hampered by insufficient funding.
- I don’t currently know of any other global health and development opportunities that I believe are higher impact, in expectation, than AMF.
I don’t anticipate either of the above facts changing in the next 6 months, so I’m choosing to allocate all of the funds immediately.
Animal Welfare Fund
By Lewis Bollard
I’ve recommended disbursements for the first $180K donated to the fund. I’ll likely recommend funding fewer groups in future, but have recommended initial grants to nine groups for a few reasons:
- I want to signal to donors the sort of things I’m likely to recommend via this fund, and to highlight groups that I think (a) have additional room for more funding from individual donors and (b) Open Phil can’t fully fund because we already account for much of their budgets, e.g. The Humane League and Compassion in World Farming USA.
- I’m recommending a few new approaches that I’m not sure have significantly more room for funding than I’m proposing, e.g. the Effective Altruism Foundation and The Fórum Nacional.
- I’m recommending some groups whose funding gaps I anticipate Open Phil may fill in future, so I only want to fund those groups enough to expand in the meantime.
- I want to maintain some diversity within this fund so that donors can support a diversity of approaches.
The Humane League ($30K)
Advocacy group. THL is one of two key campaigning groups responsible for the major recent US corporate wins for layer hens and broiler chickens. (The other is Mercy for Animals, which I’m not supporting via this Fund because I’m confident that major donors, including Open Phil, will fill its funding needs for now.) THL has also played a critical role in the global corporate campaign wins for layer hens, via the Open Wing Alliance, a grouping of 33 campaign groups that it organized. I’ve been consistently impressed by THL’s management, focus on staff and activist development, and wise use of funds across program areas. Open Phil already accounts for roughly half of THL’s budget, so dependence concerns may constrain our ability to fill its funding needs in future.
Animal Equality ($30K)
Advocacy group. Animal Equality does grassroots activism, corporate campaigning, and undercover investigations across Europe, the Americas, and India. I’ve been impressed by its constant updating based on evidence: first moving toward only farm animal welfare work, and later toward a focus on corporate campaigning. I also think that its co-founders Sharon Nunez and Jose Valle have a strong vision for building a grassroots movement globally. I think it has funding needs now that aren’t likely to be immediately met.
New Harvest ($30K)
Clean meat research group. I’m not sure what the odds are that we’ll ever develop price-competitive clean or cultured meat. The evidence I’ve seen has convinced me that we won’t have it in the next five years, as some boosters claim. But I think it’s plausible that we will in the next 20-50 years, and I think the odds of it ever being developed will depend on the funds invested in it now. I’m also excited about the Good Food Institute’s work in this space, but I think that big funders (including Open Phil) will fill GFI’s funding needs in the medium term. I think New Harvest fulfills an important and complementary role, and has more room for more funding.
The Effective Altruism Foundation ($30K)
Research on the welfare of animals in natural environments. This grant will fund the research on the welfare of wild animals done by researchers Ozy Brennan and Persis Eskander, which lost its funding due to internal changes at EAF. I’ve been impressed with their recent research, which focuses on foundational questions, like the best scientific methods for measuring the wellbeing of wild animals, and relatively non-controversial potential interventions, like more humane methods of pest control. I view this as an important and highly neglected cause, though I’m unsure how tractable it will be and think more research is needed.
The Fórum Nacional de Proteção e Defesa Animal in Brazil ($20K)
Advocacy group. The Fórum Nacional is Brazil’s largest animal protection network with 120+ affiliated NGOs (mainly companion animal groups). Advocates I trust credit the group with a key role in securing crate-free pledges from Brazil’s three largest pork producers, and more recently cage-free pledges from Brazil’s three largest mayo producers, amongst others. Open Phil already accounts for roughly half of the Fórum Nacional’s budget, so dependence concerns may constrain our ability to fill its funding needs in future, and I’m less optimistic that other donors will step in than I am for THL or CIWF USA given Brazil’s challenging fundraising environment.
Compassion in World Farming USA ($10K)
Advocacy group. CIWF USA is one of two corporate advocacy groups responsible for the major recent US corporate wins for layer hens and broiler chickens. (The other is the Humane Society of the US Farm Animal Protection campaign, which is harder to support via this fund because of fungibility concerns.) It’s now focused almost exclusively on winning further corporate welfare reforms for broiler chickens. Open Phil already accounts for roughly half of CIWF USA’s budget, so dependence concerns may constrain our ability to fill its funding needs in future.
The Albert Schweitzer Foundation in Germany ($10K)
Advocacy group. This group appears to have been instrumental in securing cage-free and other corporate pledges in Germany, as well as in advancing some policy reforms and institutional meat reduction efforts. It currently has funding needs which may be filled in the medium term.
Animal Charity Evaluators ($10K)
Charity evaluator. I like the work that ACE does to build a more effective farm animal movement through research, charity recommendations, and outreach to donors, researchers, and advocates. When I recommended this initial grant, ACE had significant room for more funding. I’m now more confident that this funding gap will be filled by large funders, so it’s unlikely that I’ll direct more funds to ACE this year.
Otwarte Klatki in Poland ($10K)
Advocacy group. This young grassroots group appears to have helped achieve significant corporate reforms in Poland with a small budget and in a tough political environment. It currently has funding needs, though they may be filled in the medium term.
Long-Term Future Fund
By Nick Beckstead
The Long-Term Future Fund made one grant of $14,838.02 to the Berkeley Existential Risk Initiative (BERI).
- How I got the idea: Andrew Critch, who created BERI, requested $50,000.
- What it is: a new initiative providing various forms of support (administrative, expert consultations, technical support) to researchers working on existential risk issues. It operates as a non-profit entity, independent of any university, so that it can help multiple organizations and operate more swiftly than would be possible within a university context. For more information, see their website.
- Why I provided the funds: key inputs to my decision include:
  - The basic idea makes sense to me. I believe this vehicle could provide swifter, more agile support to researchers in this space, and I think that could be helpful.
  - I know Critch and believe he can make this happen.
  - I believe I can check in on this a year or two from now and get a sense of how helpful it was. Supporting people to try out reasonable ideas when that seems true is appealing to me.
  - I see myself as a natural first funder to ask for new endeavors like this, and believe others who would support this would make relatively wise choices with their donations. I therefore did not check much whether someone else could have or would have funded it.
  - This seemed competitive with available alternatives.
- I did not provide the full $50,000 from the Long-Term Future Fund because I didn’t have enough funding yet; I provided all the funding I had at the time. The remainder of the funding was provided by the EA Giving Group and some funds held in a personal DAF. (This illustrates complex issues of fungibility that I plan to discuss at a later date.)
Effective Altruism Community Fund
(At the time of writing, the EA Community Fund had not made any grants.)
Mistakes and Updates
Since the launch of EA Funds, we’ve made several mistakes, which have led to some useful updates. We outline these below.
Understatement of EA Funds Risks
How we fell short: In our launch post (available here and here) we argue that donations to EA Funds are likely to be at least as good as Open Phil’s last dollar, and that Open Phil’s last dollar may be higher value than the lowest-cost alternative, namely donating to GiveWell-recommended charities.
However, this argument did not sufficiently communicate that Open Phil is likely to donate its last dollar many decades in the future which adds a good deal of extra risk that does not exist for an option like donating to GiveWell-recommended charities.
How we’re improving: We’ve added some additional paragraphs about this issue to the “Why donate to Effective Altruism Funds” page. We also added a paragraph to the “Why might you choose not to donate to this fund?” page for the Animal Welfare, Long-Term Future, and EA Community funds which addresses the need to trust Open Phil when making donations to EA Funds. We added a similar, but much shorter, paragraph addressing the need to trust GiveWell to the Global Health and Development fund page.
Poor content about EA Funds on the Giving What We Can website
How we fell short: Around a month after launch we added some information and recommendations for EA Funds to the Giving What We Can website (here, here, and here). This information endorsed EA Funds without linking to the arguments in favor of it and did not sufficiently highlight our belief that not all donors should give to EA Funds.
In addition, this recommendation was at odds with our public statement in our launch post that EA Funds was in a three-month test period. Some users were confused as to why we would recommend a project which we were still testing.
How we’re improving: We added a link to the “Why donate to EA Funds” page (or reproduced that content) on all three GWWC pages. We also added a sentence explaining that we do not think EA Funds is likely to be the highest-impact option for all donors.
We’re also releasing this update post to explain how we’ve updated and why we feel comfortable recommending EA Funds to a wider pool of donors.
Potential issues and areas of uncertainty
- We followed the YC mantra of “launch when you’re still slightly embarrassed” in deciding how quickly to launch EA Funds. This allowed us to move quickly and take EA Funds from concept to launch in less than a month, but it also led us to launch a product with some software and content bugs. Since CEA has a more established brand than most startups, and since we’re dealing with large amounts of money, it might have been appropriate to spend more time refining the product before launch.
- We have struggled to find a balance between the desire to be careful and thorough in describing the reasons in favor of donating to EA Funds on the one hand, and the desire to be user-friendly and appealing to newer donors on the other. Our current homepage likely leans too far toward being user-friendly and sparse on argumentation, while our launch post likely leaned too far in the direction of requiring lots of background context to understand. We’ll continue to work on striking the appropriate balance as EA Funds evolves, including A/B testing some different options to get more information on what’s appropriate and useful.
- The EA Funds user interface unintentionally nudges users toward splitting their donation between the available causes because it shows all the options simultaneously and asks you to choose your allocation between them. It is an open question whether donors should split between plausible options or donate entirely to the option they think is best in expectation. We’re currently evaluating options for how to either help donors think through the split-versus-no-split decision or make the user interface less biased in favor of donation splitting.
Future plans
Below we highlight some of the near-term priorities for EA Funds.
Is the growth of EA Funds dependent on the growth of EA?
The success of EA Funds so far is primarily attributable to the size of the existing EA community. One important question is whether the growth of EA Funds will be dependent on the growth of the EA community or whether EA Funds can grow independently, and perhaps faster than the growth of EA.
If EA Funds can grow independent of EA, then it likely makes sense to spend a good deal of staff time and money working directly on improving the project and getting more money moving through the platform. If EA Funds primarily grows as EA grows, then it makes sense to spend staff time and money working on growing EA while making sure that the EA community knows about EA Funds.
We’re looking at three general options for growing EA Funds independently of growing the EA community: online marketing, engaging with high-net-worth donors, and partnership development. We’ll be looking for low-cost ways to test tactics in each of these domains over the coming months, while the organization’s main focus remains the EA community. If none of these options look promising, then we’ll likely focus on growing the EA community while maintaining EA Funds as a donation option for EAs.
Adding new funds and new fund managers
In our launch post we said:
If we decide to proceed with the EA Funds project after the three month trial, our aim would be to have 50% or less of the Fund Managers be Open Phil Program Officers (although they may manage more than 50% of the money donated).
This continues to be an important goal for us. Internally we’ve discussed some ideas for what funds or fund managers we might add to accomplish this goal, but we haven’t settled on any firm plans. We plan to allocate more time to accomplishing this goal over the summer.
If you have ideas for funds or fund managers we might add, please fill out this form and/or email me at kerry@effectivealtruism.org.
The EA Funds infrastructure as a platform
Behind the scenes, donations to EA Funds go to CEA until the fund manager makes a grant recommendation, at which point CEA donates the money to the recipient organization.
We chose this system over other options, like using a separate organization to receive the money, using charity platforms like CauseVox, or setting up an independent donor-advised fund, for several reasons. These include lower administrative costs for us, more control over the user experience, lower fees with the possibility of negotiating even lower fees in the future, and tax deductibility in the US and UK through the same website and platform.
This system is designed such that it can scale beyond just collecting donations to EA Funds. For example, we could process donations to individual charities, help coordinate donor lotteries, process bequests, process birthday and holiday fundraisers, and more. In the short term, we are replacing the Giving What We Can Trust with EA Funds because CEA can make the same grants with fewer restrictions (more on this in our March update) and can use EA Funds to process donations to individual charities for members. We’ll be looking for other ways to use this infrastructure to benefit the EA community.
As much as I admire the care that has been put into EA Funds (e.g. the ‘Why might you choose not to donate to this fund?’ heading for each fund), this sentence came across as ‘too easy’ to me. To be honest, it made me wonder if the analysis was self-critical enough (I admit to having scanned it), as I’d be surprised if the trusted people you spoke with couldn’t think of any significant risks. I also don’t think a ‘largely positive’ reception is a good indicator. If a person like Eliezer stood out as the sole person in disagreement, that should give pause for thought.
Even though the article is an update, I’m somewhat concerned that it goes so little into possible long-term risks. One that seems especially important is the consequence of centralising fund allocation (mostly to managers connected to OP) for the diversity of views and decentralised correction mechanisms within our community. Please let me know where you think I might have made mistakes/missed important aspects.
I especially want to refer to Rob Wiblin’s earlier comment: http://effective-altruism.com/ea/17v/ea_funds_beta_launch/aco
This is my largest concern as well. As someone who looks for funding for projects, I’ve noticed a lot of donors centralizing around these funds. This is good for them, because it saves them the time of having to evaluate, and good for me, because it gives me a single place to request funding. But if I can’t convince them to fund me for some reason and I think they’re making a mistake, there are no other donors to appeal to anymore. It’s all or nothing.
The upside of centralization is that it helps prevent the unilateralist curse for funding bad projects. As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.
That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn’t fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I’m not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I’m open to ideas on how to help with this concern.
Quick (thus likely wrong) thought on solving the unilateralist’s curse: put multiple people in charge of each fund, each representing a different worldview, and give everyone 3 grant vetoes each year (so they can prevent grants that are awful in their worldview). You could also give them control of a percentage of funds in proportion to CEA’s / the donors’ confidence in that worldview.
Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements? I’d like that a lot.
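To make the ranked-preference idea concrete, here is a minimal sketch of one possible aggregation rule (a Borda count with the budget split proportional to score; the rule, the grant names, and the rankings are all placeholder assumptions, not a worked-out proposal):

```python
def borda_allocation(rankings, budget):
    # Each manager submits a best-first ranking of candidate grants.
    # A grant gets (n - position) points per ballot; the budget is then
    # split in proportion to total points.
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, grant in enumerate(ranking):
            scores[grant] = scores.get(grant, 0) + (n - position)
    total = sum(scores.values())
    return {grant: budget * score / total for grant, score in scores.items()}

# Hypothetical ballots from three fund managers over three candidate grants
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_allocation(ballots, 100_000))
# -> roughly {'A': 44444, 'B': 33333, 'C': 22222}
```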
Serious question: What do you think of N fund managers in your scenario?
I don’t understand the question.
Allocating grants according to a ranked preference vote of an arbitrary amount of people (and having them write up their arguments); what is the optimal number here? Where is the inflection point where adding more people decreases the quality of the grants?
On tertiary reading I somewhat misconstrued “three fund managers” as “three fund managers per fund” rather than “the three fund managers we have right now (Nick, Elie, Lewis)”, but the possibility is still interesting with any variation.
That’s a good question. I did intend “three fund managers” to mean “the three fund managers we have right now”, but I could also see the optimal number of people being 2-3.
I’m not sure that’s true. There are a lot of venture funds in the Valley but that doesn’t mean it’s easy to get any venture fund to give you money.
There’s no shortage of bad ventures in the Valley: https://thenextweb.com/gadgets/2017/04/21/this-400-juicer-that-does-nothing-but-squeeze-juice-packs-is-peak-silicon-valley/#.tnw_Aw4G0WDt
http://valleywag.gawker.com/is-the-grilled-cheese-startup-silicon-valleys-most-elab-1612937740
Of course, there are plenty of other bad ventures that don’t get funding...
Every time in the past week or so that I’ve seen someone talk about a bad venture, they’ve given the same example. That suggests that there is indeed a shortage of bad ventures—or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are “bad” in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)
Or that there’s one recent venture that’s so laughably bad that everyone is talking about it right now...
It’s not clear that Juicero is actually a bad venture in the sense of not returning the money for its investors.
Even if that were the case, VCs make most of their money with a handful of companies. A VC can have a good fund even if 90% of their investments don’t return their money.
I would guess that the same is true for high-risk philanthropic investments. It’s okay if some high-risk investments don’t provide value as long as you are betting on some investments that deliver.
I don’t have the precise statistics handy, but my understanding is that VC returns are very good for a small number of firms and break-even or negative for most VC firms. If that’s the case, it suggests that as more VCs enter the market, more bad companies are getting funded.
This is a huge digression, but:
I’m not sure it’s obvious that current VCs fund all the potential top companies. If you look into the history of many of the biggest wins, many of them nearly failed multiple times and could easily have shut down if a key funder hadn’t existed (e.g. Airbnb and YC).
I think a better approximation is an efficient market, in which the risk-adjusted returns of VC at the margin are equal to the market. This means that the probability of funding a winner for a marginal VC is whatever it would take for their returns to equal the market.
Then also becoming a VC, to a first order, has no effect on the cost of capital (which is fixed to the market), so no effect on the number of startups formed. So you’re right that additional VCs aren’t helpful, but it’s for a different reason.
To a second order, there probably are benefits, depending on how skilled you are. The market for startups doesn’t seem very efficient and requires specialised knowledge to access. If you develop the VC skill-set, you can reduce transaction costs and make the market for startups more efficient, which enables more to be created.
Moreover, the more money that gets invested rather than consumed, the lower the cost of capital in the economy, which lets more companies get created.
The second order benefits probably diminish as more skilled VCs enter, so that’s another sense in which extra VCs are less useful than those we already have.
I don’t think the argument that there are a lot of VC firms that don’t get good returns suggests that centralization into one VC firm would be good. There are different successful VC firms that have different preferences in how to invest.
Having one central hub of decision making is essentially the model used in the Soviet Union. I don’t think that’s a good model.
Decentralized decision-making usually beats central planning by a single decision-making authority in domains with a lot of spread-out information.
I hadn’t considered the unilateralist’s curse and I’ll keep this in mind.
To what extent do you think it’s sustainable to
a) advocate for a centralised system run by trusted professionals, versus
b) build up the capacity of individual funders to recognise activities that are generally seen as problematic/negative EV by cause prioritisation researchers?
Put simply, I wonder if going for a) centralisation would make the ‘system’ fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who’d approach cause selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they’re much better off handing over their money and employing their skills elsewhere.
I admit I don’t have a firm grasp of unilateralist’s curse scenarios.
This is an interesting point.
It seems to me like mere veto power is sufficient to defeat the unilateralist’s curse. The curse doesn’t apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it’s useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don’t need to centralize power of action, just power of veto.
That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decisionmaking group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto in to centralization of action, because (ironically) it seems less authority-ish.
On second thought, perhaps it’s just an issue of framing.
Would you be interested in an “EA donors league” that tried to overcome the unilateralist’s curse by giving people in the league some kind of power to collectively veto the donations made by other people in the league? You’d get the power to veto the donations of other people in exchange for giving others the power to veto your donations (details to be worked out)
(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)
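For concreteness, here is a minimal sketch of how such a league’s rules might be modeled under the deposit idea above (the threshold, deposit size, and mechanics are all placeholder assumptions; the details really are yet to be worked out):

```python
class DonorLeague:
    """Toy model: members post a deposit on joining; a proposed donation is
    blocked if objections from other members reach the veto threshold, and
    a member who donates despite a standing veto forfeits their deposit."""

    def __init__(self, veto_threshold=3, deposit=500):
        self.veto_threshold = veto_threshold
        self.deposit = deposit
        self.deposits = {}  # member name -> remaining deposit

    def join(self, name):
        self.deposits[name] = self.deposit

    def is_vetoed(self, donor, objectors):
        # Only objections from other current members count.
        votes = {o for o in objectors if o in self.deposits and o != donor}
        return len(votes) >= self.veto_threshold

    def donate_anyway(self, donor):
        # Donating past a standing veto costs the donor their deposit,
        # which is the disincentive against simply ignoring the league.
        return self.deposits.pop(donor, 0)

league = DonorLeague()
for member in ["alice", "bob", "carol", "dan"]:
    league.join(member)
print(league.is_vetoed("alice", ["bob", "carol", "dan"]))  # -> True
```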
Every choice to fund has false positives (funding something that should not have been funded) and false negatives (not funding something that should have been funded). Veto power only guards against the first one.
Kerry’s argument was that centralization helps prevent false positives. I was trying to show that there are other ways to prevent false positives.
With regard to false negatives, I would guess that centralization exacerbates that problem—a decentralized group of funders are more likely to make decisions using a diverse set of paradigms.
The unilateralist’s curse does not apply to donations, since funding a project can be done at a range of levels and is not a single, replaceable decision.
The basic dynamic applies. Think it’s pretty reasonable to use the name to point loosely in such cases, even if the original paper didn’t discuss this extension.
The basic dynamic doesn’t apply. This isn’t about the name, it’s about the concept. You can’t make an extension from the literature without mathematically showing that the concept is still relevant!
If there’s potential utility to be had in multiple people taking the same action, then people are just as likely to err in the form of donating too little money as they are to donate too much. The only reason the unilateralist’s curse is a problem is that there is no benefit to be had from lots of agents taking the same action, which prevents the expected value of a marginal naive EV-maximizing agent’s action from being positive.
The kind of set-up where it would apply:
- An easy-to-evaluate opportunity which produces 1 util/$, which everyone correctly evaluates
- 100 hard-to-evaluate opportunities, each of which actually produces 0.1 util/$, but where everyone has an independent estimate of cost-effectiveness which is a log-normal centered on the truth
Then any individual is likely to think one of the 100 is best and donate there. If they all pooled their info, they would instead all donate to the first opportunity.
Obviously the numbers and functional form here are implausible—I chose them for legibility of the example. It’s a legitimate question how strongly the dynamic applies in practice. But it seems fairly clear to me that it can apply. You suggested there’s a symmetry with donating too little—I think this is broken because people are selecting the top option, so they are individually running into the optimizer’s curse.
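A quick Monte Carlo sketch of the toy setup above, to show how strongly the selection effect can bite (the numbers follow the example; the log-normal noise scale is an assumption chosen for illustration):

```python
import math
import random

def simulate(n_donors=10_000, n_hard=100, true_value=0.1, noise_sd=1.0):
    # One easy option worth exactly 1 util/$; n_hard options each actually
    # worth true_value util/$, which each donor estimates with independent
    # log-normal noise. Count donors whose top noisy estimate beats the
    # sure thing, i.e. who end up donating to a worse option.
    misled = 0
    for _ in range(n_donors):
        estimates = (true_value * math.exp(random.gauss(0, noise_sd))
                     for _ in range(n_hard))
        if max(estimates) > 1.0:
            misled += 1
    return misled / n_donors

print(simulate())  # roughly 0.66 with these parameters: most donors pick
                   # a hard option actually worth a tenth as much per dollar
```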
Have you even read Bostrom’s paper? This isn’t the unilateralist’s curse. You are not extending a principle of the paper, you are rejecting its premise from the start. I don’t understand how this is not obvious.
You are merely restating the optimizer’s curse, and the easy solution there is for people to read GiveWell’s blog post about it. If someone has, then the only way their decisions can be statistically biased is if they have the wrong prior distributions, which is something that nobody can be sure about anyway, and which is therefore wholly inappropriate as grounds for any sort of overruling of donations. But even if it were appropriate, having a veto would simply be the wrong thing to do, since (as noted above) the unilateralist’s curse is no longer present, and you’d have to find a better strategy that corrects for improper priors in accordance with the actual situation.
It also seems fairly clear to me that the opposite can apply - e.g., if giving opportunities are normally distributed and people falsely believe them to be lognormal, then they will give too much to the easy-to-evaluate opportunity.
I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Agree that vetoes aren’t the right solution, though (indeed they are themselves subject to a unilateralist’s curse, perhaps of a worse type).
Really? I haven’t misinterpreted you in any way. I think the issue is that you don’t like my comments because I’m not being very nice. But you should be able to deal with comments which aren’t very nice.
Yes, it’s specifically the effect of the optimizer’s curse in situations where the better options have more uncertainty regarding their EV estimates, but that’s the only time that the optimizer’s curse is decision relevant anyway, since all other instantiations of the optimizer’s curse modify expected utilities without doing anything to change the ordinal ranking. And the fact that this happens to be a case with 100 uncertain options rather than 1, or a large group of donors rather than just one, doesn’t modify the basic issue that people’s choices will be suboptimal, so the fact that you specified a very particular scenario doesn’t make it about anything other than the basic optimizer’s curse.
Our friend presiding over the Machine Doggo Fund, I’m sure, would be interested to hear about heterodox or contrarian advice to hedge against the centralization of EA philanthropy. I know a few effective altruists whose advice on giving he’d respect. That it’d consolidate his image as some cross between an edgy renegade and a folk hero, a sort of Batman of earning to give, can only help the case we could make.
I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I’d read on Facebook and was not thinking about responses to the initial launch post.
I agree that it’s not fair to say that the criticisms have been predominantly about website copy. I’ve changed the relevant section in the post to include links to some of the concerns we received in the launch post.
I’d like to develop some content for the EA Funds website that goes into potential harms of EA Funds that are separate from the question of whether EA Funds is the best option right now for individual donors. Do you have a sense of what concerns seem most compelling or that you’d particularly like to see covered?
I forgot to do a disclosure here (to reveal potential bias):
I’m working on the EA Community Safety Net project with other committed people, which just started on 31 March. We’re now shifting direction from focusing on peer insurance against income loss to building a broader peer-funding platform in a Slack Team that also includes project funding and loans.
It will likely fail to become a thriving platform that hosts multiple financial instruments given the complexities involved and the past project failures I’ve seen on .impact. Having said that, we’re aiming high and I’m guessing there’s a 20% chance that it will succeed.
I’d especially be interested in hearing people’s thoughts on structuring the application form (i.e. criteria for project framework) to be able to reduce Unilateralist’s Curse scenarios as much as possible (and other stupid things we could cause as entrepreneurial creators who are moving away from the status quo).
Is there actually a list of ‘bad strategies naive EAs could think of’ where there’s a consensus amongst researchers that one party’s decision to pursue one of them would create systemic damage on an expected-value basis? A short checklist (that I can go through before making an important decision) based on surveys would be really useful to me.
Come to think of it: I’ll start with a quick Facebook poll in the general EA group. That sounds useful for compiling an initial list.
Any other opinions on preventing risks here are really welcome. I’m painfully aware of my ignorance here.
I haven’t looked much into this, but basically I’m wondering if simple, uniform promotion of EA Funds would undermine the capacity of community members in, say, the upper quartile of rationality/commitment to build robust idea-sharing and collaboration networks.
In other words, whether it would decrease their collective intelligence pertaining to solving cause-selection problems. I’m really interested in getting practical insights on improving the collective intelligence of a community (please send me links: remmeltellenis[at]gmail.dot.com).
My earlier comment seems related to this:
(Btw, I admire your openness to improving analysis here.)
Excellent point.
My suggestion for increasing robustness:
Diverse fund managers, and willingness to have funds for less-known causes. A high diversity of background/personal social networks amongst fund managers, and a willingness to have EA funds for causes not currently championed by OPP or other well known orgs in the EA-sphere could be a good way to increase robustness.
Do you agree? And what are your thoughts in general on increasing robustness?
One note on this: blockchain-based DAOs (decentralized autonomous organizations) are a good way to decentralize a giving body (like EAFunds). Rhodri Davies has been doing good work in this space (on AI-led DAOs for effective altruism). See https://givingthought.libsyn.com/algorithms-and-effective-altruism or my recent overview of EA + Blockchain: https://medium.com/@RhysLindmark/creating-a-humanist-blockchain-future-2-effective-altruism-blockchain-833a260724ee
I’m worried that this impairs our ability to credibly signal that we are not a scam. Originally we could say that we didn’t want any money ourselves—we were just asking for donations to third parties. Then we started asking for money directly, but the main focus was still on recommending donations to third parties. But now the main advice is to give us money, which we will then spend wisely (trust us!). It seems that outsiders could (justifiably) find this much less persuasive.
I agree that people new to EA could find EA Funds much less persuasive than the previous donation recommendations we used. I expect that we’ll find out whether or not this is true as we work on expanding EA Funds outside of EA. If non-EAs don’t want to use EA Funds, then we’ll probably want to lead with other examples of how people select effective donation options.
I’m shocked that no one has commented on Elie Hassenfeld distributing 100% of the money to GiveWell’s top charity. Even if he didn’t run GiveWell, this just seems like adding an extra step to giving to GiveWell. But given that one of the main arguments for the funds was to let smaller projects get funded quickly and with less overhead, giving 100% to one enormous charity with many large donors is clearly failing at that goal.
I would guess that $300k simply isn’t worth Elie’s time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell direct and directed donations. It seems to me the obvious thing to do is have the fund managed by someone who has the time to do so, rather than create another way to give money to GiveWell.
This is consistent with the optionality story in the beta launch post:
However, I do think this suggests that—to the extent to which GiveWell is already a known and trusted institution—for global poverty in particular it’s more important to get the fund manager with the most unique relevant expertise than a fund manager with the most expertise.
It’s worth noting that it’s all pretty fungible anyway. GiveWell could have just as easily claimed the money was going toward an incubation grant and then put more incubation grant money toward AMF.
This seems like an excellent reason to have someone uninvolved with an existing large organization administer the fund.
On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don’t want to advocate contrarianism for contrarianism’s sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell’s ability to implement its founders’ values). Since he didn’t, that’s at least weak evidence that AMF is the best global poverty funding opportunity we know about.
Overall I think it’s good that Elie didn’t feel the need to justify his participation by doing a bunch of makework. This is still evidence that channeling this through Elie probably gives a false impression of additional optimizing power, but I think that should have been our strong prior anyhow.
Only if GiveWell and the EA Fund are both supposed to be perfect expressions of Elie’s values. GiveWell has a fairly specific mission which includes not just high expected value but high certainty (compared to the rest of the field, which is a low bar). EA Funds was explicitly supposed to be more experimental. Like you say below, if organizers don’t think you can beat GiveWell, encourage donating to GiveWell.
Or to simply say “for global poverty, we can’t do better than GiveWell so we recommend you just give them the money”.
Agreed—it definitely seems reasonable to me, and very consistent with GiveWell’s overall approach, that Elie sincerely believes that donating to AMF is the best use of funds.
Not sure if this is the right place to say this, but on effectivealtruism.org where it links to “Donate Effectively,” I think it would make more sense to link to GiveWell and ACE ahead of the EA Funds, because GiveWell and ACE are more established and time-tested ways of making good donations in global poverty and animal welfare.
(The downside is this adds complexity because now you’re linking to two types of things instead of one type of thing, but I would feel much better about CEA endorsing GiveWell/ACE as the default way to give rather than its own funds, which are controlled by a single person and don’t have the same requirement (or ability!) to be transparent.)
Alternatively, you could have global poverty and animal welfare funds that are unmanaged and just direct money to GiveWell/ACE top charities (or maybe have some light management to determine how to split funds among the top charities).
I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don’t see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or linked to from it)? That way anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.
The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.
We have an issue with our CMS that is preventing the grant information from showing up on the website. I will include these grants and all future grants as soon as that is fixed.
First, thanks very much for this valuable transparency!
I notice the movement-building fund hasn’t donated any money yet. I’m curious what the process for making grants from this fund will be.
Specifically, what steps are CEA and Nick (a trustee of CEA) going to take to recuse themselves from discussions in the movement-building fund? Will CEA apply for money through the fund? Would there be any possibility of inappropriate pro-CEA bias if someone else applied to the fund wanting to do something similar to what CEA is doing or wants to do?
The current process is that fund managers send grant recommendations to me and Tara and we execute them. Fund managers don’t discuss their grant recommendations with us ahead of time and we don’t have any influence over what they recommend.
From a legal standpoint, money donated to EA Funds has been donated to CEA. This means that we need board approval for each grant the fund managers recommend. The only cases I see at the moment where we might fail to approve a grant would be cases where a) the grant violates the stated goals of the fund or b) where the grant would not be consistent with CEA’s broad charitable mission. I expect both of these cases to be unlikely to occur.
At the moment there isn’t really an application process. Any formal system for requesting grants would be set up by Nick without CEA’s input or assistance.
That said, CEA is a potential recipient of money donated to the EA Community Fund. If we believe that we can make effective use of money in the EA Community Fund, we will make our case to Nick for receiving funding. Nick’s position as a trustee of CEA means that he has robust access to information about CEA’s activities, budget, and funding sources.
This is certainly possible. Because Nick talks to the other CEA trustees regularly, it is likely that he would know where other organizations overlap with CEA’s work and what CEA staff think about other organizations. This might cause him to judge other organizations more unfavorably than he would if he were not a CEA trustee.
I think the appropriate outside view is that Nick will be unintentionally biased in CEA’s favor in cases where CEA conflicts with other EA community building organizations. My inside view from interacting with Nick is that he is a careful and thoughtful decision-maker who is good at remaining objective.
If you’re worried about pro-CEA bias and if you don’t have sufficient information about Nick to trust him, then you probably shouldn’t donate to the EA Community Fund.
I’m pleased to see the update on GWWC recommendations; it was perturbing to have such different messages being communicated in different channels.
However, I’m really disappointed to hear the Giving What We Can Trust will disappear, not least because it means I no longer have a means to leave a legacy to effective charities in my will (which I’ll now need to change). Previously the GWWC Trust meant I had a means of leaving money, hedging against changes in the landscape of what’s effective, run by an org whose philosophy I agree with and whose decisions I had a good track record of trusting. EA Funds requires that I either specify organisations (which I can do myself in a will, but they might not be the best picks at the relevant time) or trust a single individual in whom I don’t have the same confidence. Also, if a legacy is likely to be a substantial amount of money, I am more risk-averse about where it goes.
Hi Bernadette,
We’re sorry that our communication on this has not been clear enough. We were waiting on some technical details so that we could inform Trust users of the exact changes and what they needed to do in advance, but I’ll communicate what we can today, and Larissa Hesketh-Rowe is also going to email Giving What We Can members to make sure everyone is included.
In terms of the Trust, we are moving all of the functionality the Trust had over to EA Funds, which we believe will ultimately be a much better platform, both for users and for us in terms of managing the administration.
You can leave a legacy in a similar way via EA Funds; as you mentioned, you can allocate it to the Funds or to specific charities. However, you can also allocate it to GiveWell’s recommended charities as they stand at the time of granting your bequest. In practice this should be similar to how bequests were made to the Trust: keeping the Giving What We Can Trust running would still mean using GiveWell’s recommendations, as Giving What We Can no longer conducts its own research. In our update to members at the end of July, we explained that the restructure with CEA meant that while we would continue to run the core aspects of Giving What We Can, like the pledge, we felt that our research was not offering sufficient value over and above GiveWell’s, and that we would therefore move to making our list of recommended charities follow GiveWell’s recommendations. We are still establishing what the Trust’s options are for handling and transferring bequests, to see whether it is necessary to ask donors to change their wills or whether we can transfer them, along with their instructions and allocations, to CEA. We’ll communicate this as soon as we have more information.
It seems we’ve not communicated these changes clearly enough to members and so we’ll be seeking to address this over the next couple of weeks. Do please post any other questions you have or clarifications you’d like as we can use that to help inform what else we email to members.
Best wishes, Alison
Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind it. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.
My main concern about this fund is that it’s not really a “Global Health and Development” fund; it’s much more GiveWell-centric than global-health-and-development-centric. The decision to allocate all fund money to GiveWell’s top charity reinforces some of my concerns, but it’s actually something that is clear from the fund description.
From the description, it seems to be serving largely as a backup to GiveWell Incubation Grants (in cases where e.g. Good Ventures chooses not to fund the full amount) and as additional funding for GiveWell top charities.
Both the cited examples are recipients of GiveWell Incubation Grants, and in the pipeline for evaluation by GiveWell for top charity status. Even setting aside actual grantees, the value of the fund, according to the fund manager, is in terms of its value to GiveWell (emphasis mine):
The GiveWell-centric nature of the fund is fine except that the fund’s name suggests that it is a fund for global health and development, without affiliation to any institution.
Even beyond the GiveWell-as-an-organization-centered nature of the fund, there is a sense in which the fund reinforces the association of global health and development with quantifiable-and-low-risk, linear, easy buys. That association makes sense in the context of GiveWell (whose job it is to recommend linear-ish buys) but seems out of place to me here. Again quoting from the page about the fund:
There are two distinct senses in which the statement could be interpreted:
1. There is large enough room for more funding for interventions in global health that have a strong evidence base, so that donors who want to stick to things with a strong evidence base won’t run out of stuff to buy (i.e., lots of low-hanging fruit).
2. There’s not much scope in global health for high-risk but high-expected-value investments, because any good buy in global health would have a strong evidence base.
I’d agree with the first interpretation, but the second interpretation seems quite false (looking at the Gates Foundation’s portfolio shows a fair amount of risky, nonlinear efforts including new vaccine development, storage and surveillance technology breakthroughs, breakthroughs in toilet technology, etc.). The framing of the sentence, however, most naturally suggests the second interpretation, and moreover, may lead the reader to a careless conflation of the two. It seems to me like there’s a lot of conflation in the EA community (and penumbra) between “global health and development” and “GiveWell current and potential top charities”, and the setup of this EA Fund largely reflects that. So in that sense, my criticism isn’t just of the fund but of what seems to me an implicit conflation.
Similar issues exist with two of the other funds, the animal welfare fund and the far future fund, but I think they are less concerning there. With “animal welfare” and “far future”, the way the terms are used in EA Funds and in the EA community differs from the picture they’ll conjure in the minds of people in general. But as far as I know, there isn’t an established, cohesive infrastructure of organizations, funding sources, etc. for the EA usage to be at odds with.* Whereas with global health and development, you have things like the WHO, the Gates Foundation, the Global Fund, and even an associated academic discipline, so the appropriation of the term for a fund that’s somewhat of a GiveWell satellite seems jarring to me.
Some longer-term approaches that I think might help; obviously they wouldn’t be changes you can make quickly:
(a) Rename funds so that the names capture more specifically the sort of things the funds are doing. e.g. if a fund is only being used for last-mile delivery of interventions as opposed to e.g. vaccine development, that can be specified within the fund name.
(b) Possibly have multiple funds within the same domain (e.g., global health & development) that capture different kinds of use cases (intervention delivery versus biomedical research) and have fund managers with relevant experience in the domains. e.g. it’s possible that somebody with experience at the Gates Foundation, Global Fund, WHO, IHME, etc. could do fund allocation in some domains of global health and development better for some use cases.
Anyway, these are my thoughts. I’m not a contributor (or potential contributor, in the near term) to the funds, so take with appropriate amount of salt.
*It could be that if I had deeper knowledge of mainstream animal welfare and animal rights, or of mainstream far future stuff (like climate change) then I would find these jarring as well.
Hey Vipul, thanks for taking the time to write this. I think I largely agree with the points you’ve made here.
As we’ve stated in the past, the medium-term goal for EA Funds is to have 50% or less of the fund managers be Open Phil/GiveWell staff. We haven’t yet decided whether we would plan to add fund managers in new cause areas, add fund managers with different approaches in existing cause areas, or some combination of the two. Given that Global Health and Development has received the most funding, there is likely room for adding funds that take a different approach to funding the space. Personally, I’d be excited to see something like a high-risk, high-reward global health and development fund.
I probably disagree with changing the name of the fund right now as I think the current name does a good job of making it immediately clear what the fund is about. Because the UI of EA Funds shows you all the available funds and lets you split between them, we chose names that make it clear what the fund is about as compared to what the other funds are about.
If we added a fund that was also in Global Health and Development, then it might make sense to change the current name of the Global Health and Development fund to make it clear how the two funds are distinct from one another.
By the way, if you know of solid thinkers in Global Health and Development funding who are unaffiliated with GiveWell, please feel free to email their names to me at kerry@effectivealtruism.org.
How much did you expect?
What percentage of funds was raised from people who are part of the EA community / identify as EAs, and what percentage of funds from people outside the community (e.g. Hacker News)?
(The launch post said that you’ll be “seeing how the concept is received outside of the EA community” so it would be nice to learn about that, too.)
In our post-donation survey, we ask whether people consider themselves a part of the EA Community. Out of 32 responses, 10 said no (10/32 ≈ 31%), which suggests that around 1/3 of donors are new to EA.
However, donations from this group were generally quite small and some of them indicated that they had donated to places like AMF or GiveDirectly in the past. My overall guess is that the vast majority of money donated so far has been from people who were already familiar with EA.
I appreciate that the post has been improved a couple times since the criticisms below were written.
A few of you were diligent enough to beat me to saying much of this, but:
This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he’s aware that many people have in fact raised concerns about things other than communication and EA Funds’ website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. A number of people would like to hear about any progress made since the discussion that happened on this thread regarding two problems: 1) how to address conflicts of interest, given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren’t OPP staff into Fund Managers) narrows the range of new information about effective opportunities that reaches the EA Funds’ Fund Managers.
I’ve spoken with a couple of EAs in person who have mentioned that making the claim that “EA Funds are likely to be at least as good as OPP’s last dollar” is harmful. In this post, it’s certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it’s less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves: say, if individual EAs had information about giving opportunities more effective than EA Funds, but donated to EA Funds anyway out of a sense of pressure caused by the “at least as good as OPP” slogan.
More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds “received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page).” But the first sentence of the Wikipedia page for NPS (which I’m sure the author read at least the first line of, given that he linked to it) states that NPS is “a management tool that can be used to gauge the loyalty of a firm’s customer relationships” (emphasis mine). However, EA Funds isn’t a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge the satisfaction of a for-profit company’s customers can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you’ve made this assumption) betrays a lack of intent to honestly inform EAs.
This post has other problems, too; it uses the NPS scoring system to analyze donors’ and others’ responses to the question:
The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being “felt to be good” in industry. Worse, the post mentions that this result
It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I’d agree that it’s a good idea not to “take NPS too seriously”, though in this case, I wouldn’t say the benefit of using NPS in the first place outweighed the cost of incorrectly suggesting that there was a respectable amount of quantitative support for the conclusions drawn in this post.
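For readers unfamiliar with the metric, here is a minimal sketch of how a standard NPS figure like +56 or 0 is computed. The 0-10 scale and the promoter/detractor cutoffs are the conventional ones; the sample responses are made up for illustration and are not EA Funds’ actual survey data:

```python
# Standard NPS: percent of promoters (scores 9-10) minus percent of
# detractors (scores 0-6), on the usual 0-10 "how likely are you to
# recommend us?" scale. The responses below are hypothetical.

def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 10, 9, 9, 9, 9, 8, 7, 7, 3]  # 6 promoters, 3 passives, 1 detractor
print(net_promoter_score(responses))  # -> 50
```

A score of 0 simply means promoters and detractors are equally common, which is why industry benchmarks matter so much for interpreting the number.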
I’m disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple of errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I’ve pointed out, all of which point in the direction of making EA Funds look better than it is, things don’t look good. Things don’t look good regarding how well this project has been received, but that’s not the larger problem here. The larger problem is that this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.
Writing this made me cry, a little. It’s late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can’t trust anyone I haven’t personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.
Some days, I like to quietly smile to myself and wonder if we might be able to take that back.
I know you say that this isn’t the main point you’re making, but I think it’s the hidden assumption behind some of your other points, and it was a surprise to read this. Will’s post introducing the EA Funds is the 4th most upvoted post of all time on this forum. Most of the top-rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community as a whole is positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA’s target of $1m.
Given all that, what would ‘well-received’ look like in your view?
If you think the community is generally making a mistake in being supportive of the EA funds, that’s fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.
Yeah, in this community it’s easy for your data to be filtered. People commonly comment with criticism, rarely with just “Yeah, this is right!”, and so your experience can be filled with negative responses even when the response is largely positive.
By one measure, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don’t think this is problematic in itself, since it could just indicate hype dying down over time rather than support being retracted.
Part of what I’m tracking when I say that the EA community isn’t supportive of EA Funds is that I’ve spoken to several people in person who have said as much. I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn’t have the energy to do so (though one offered to proofread anything I wrote in that vein). So a large part of my reason for feeling that there isn’t a great deal of community support for EA Funds has to do with the ways in which I’d expect the data on how much support there actually is to be filtered. For example:
the method in which Kerry presented his survey data made it look like there was more support than there was
the fact that Kerry presented the data in this way suggests it’s relatively more likely that Kerry will do so again in the future if given the chance
social desirability bias should also make it look like there’s more support than there is
the fact that it’s socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there’s more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.
It also doesn’t help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn’t show anything useful either way, I’m not comfortable with accepting the claim that EA Funds has been particularly well-received.
So I probably disagree with some of your bullet points, but unless I’m missing something I don’t think they can be the crux of our disagreement here, so for the sake of argument let’s suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.
...I still don’t see how to get from here to (for example) ‘The community is probably net-neutral to net-negative on the EA funds, but Will’s post introducing them is the 4th most upvoted post of all time’. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they ‘should’, why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry’s post introducing EA Ventures has just 16 upvotes*.
It just seems like the simplest explanation of your observed data is ‘the community at large likes the funds, and my personal geographical locus of friends is weird’.
And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you’re very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I’ve spoken to in person about the EA funds thinks they’re a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.
FWIW I’m very open to suggestions on how we could settle this question more definitively. I expect CEA pushing ahead with the funds if the community as a whole really is net-negative on them would indeed be a mistake. I don’t have any great ideas at the moment though.
*http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/
I’d say this is correct. The EA Forum itself has such a selection effect, though it’s weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy, given that a survey may weight the opinions of people who have considered the problem less, or who feel less strongly about it, equally with the opinions of others. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist’s perspective, and then to link this up to how EA Funds continuing would show people that projects reported on and marketed with relatively lower-integrity methods can succeed despite (or even because of?) this.
I’d also agree that, at the time of Will’s post, it would have been incorrect to say:
But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.
My view is further that the community’s response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past. If lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, so the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I’m positing that the public response to EA Funds would be more negative if we hadn’t already filtered certain people out of EA by having an integrity problem in the first place.
(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don’t feel like I have a satisfactory handle on it, but also feel I should reply at this point.)
This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not ‘the funds have not been well-received’, but rather ‘the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way’. Maybe that is what you were aiming for originally and I just didn’t read it that way.
True. That post was only a couple of months before this one, though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I’m curious whether you personally perceive a change (drop) in popularity in your circles?
This story sounds plausibly true. It’s a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so turning it over in my head for the past few days I’m still not sure how much weight to put on it.
Perhaps a simple (random) survey? Or, if that’s not possible, a poll of some sort?
My sense (and correct me if I’m wrong) is that the biggest concerns seem to be related to the fact that there is only one fund for each cause area and the fact that Open Phil/GiveWell people are running each of the funds.
I share this concern and I agree that it is true that EA Funds has not been changed to reflect this. This is mostly because EA Funds simply hasn’t been around for very long and we’re currently working on improving the core product before we expand it.
What I’ve tried to do instead is precommit to 50% or less of the funds being managed by Open Phil/GiveWell staff and give a general timeline for when we expect to start making good on that commitment. I know that doesn’t solve the problem, but hopefully you agree that it’s a step in the right direction.
That said, I’m sure there are other concerns that we haven’t sufficiently addressed so far. If you know of some off the top of your head, feel free to post them as a reply to this comment. I’d be happy to either expand on my thoughts or address the issue immediately.
Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will’s post, because I’m glad he posted it (and likewise Tara’s) - thanks!
That seems like a good use of the upvote function, and I’m glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:
I don’t think it’s a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track ‘I agree with this’ extremely strongly, even though lots of people (myself included) agree that ideally they shouldn’t.
In the hypothetical where there’s lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:
*Not sure if ‘forgot’ is quite the right word here, just mirroring your description of my comment as ‘reminding’ you.
Thanks for taking the time to provide such detailed feedback.
I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I’d read on Facebook and was not thinking about responses to the initial launch post.
I agree that it’s not fair to say that the criticisms have been predominantly about website copy. I’ve changed the relevant section in the post to include links to some of the concerns we received in the launch post.
I’d like to be as exhaustive as possible, so please provide links to any areas I missed so that I can include them (note that I didn’t include all of the comments you linked to if I thought our launch post already addressed the issue).
From my point of view, the context for the first section was to explain why we updated in favor of EA Funds persisting past the three-month trial before the trial was over. This was important to communicate because several people expressed confusion about our endorsement of EA Funds while the project was still technically in beta. This is why the first section highlights mostly positive information about EA Funds whereas later sections highlight challenges, mistakes etc.
I think the update that your comment is suggesting is that I should have made the first section longer and should have provided a more detailed discussion of the considerations for and against concluding that EA Funds has been well-received so far. Is that what you think or do you think I should make a different update?
A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail was spent examining people’s concerns re: conflicts of interest, and centralization of power, i.e. concerns which were commonly expressed but not resolved.
I’m concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and you mostly didn’t gather more support later on relative to what one would expect, then your prior that EA Funds is well received should be stronger, but you shouldn’t update in favor of it being well received based on the more recent data. This may sound like a nitpick, but it is actually a crucially important consideration if you’ve framed things as if you’ll continue with the project only if you update in the direction of having more public support than before.
I also dislike that you emphasize that some people “expressed confusion at your endorsement of EA Funds”. Some people may have felt that way, but your choice of wording both downplays the seriousness of some people’s disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they’re less competent than others who aren’t confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.
I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I’d been told earlier, privately and publicly—that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I’m confused, I’m confused about how this wasn’t just a lie. My initial response was “HOW IS THIS OK???” (verbatim quote). I’m willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement.
ETA: I’m happy with the update to the OP and don’t think I have any unresolved complaint on this particular wording issue.
In the OP Kerry wrote:
CEA’s original expectation of donations could just have been wrong, of course. But I don’t see a failure of logic here.
Re. your last paragraph, Kerry can confirm or deny, but I think he’s referring to the fact that a bunch of people were surprised to see GWWC, for example (not sure if there were other cases), start recommending the EA Funds and close down the GWWC Trust recently, when CEA hadn’t actually officially given the funds a ‘green light’ yet. So he’s not referring to the same set of criticisms you are talking about. I think ‘confusion at GWWC’s endorsement of EA Funds’ is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds, but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.
*I got the e-mail from GWWC announcing this on 11th April. I got CEA’s March 2017 update saying they’d decided to continue with the funds later on the same day, but I think that goes to a much narrower list and in the interim I was confused and was going to ask someone about it. Checking now it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don’t read that.
https://www.centreforeffectivealtruism.org/blog/cea-update-march-2017/
Correct. We had updated in favor of EA Funds internally but hadn’t communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.
The money moved is the strongest new data point.
It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures—lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.
Do you feel that the post as currently written still overhypes the community’s perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?
It seems like the character of the EA movement needs to be improved somehow (probably, as always, there are marginal improvements to be made to the implementation too). The character matters especially because, arguably, if EA could spawn many projects, its impact would be increased many-fold.
I think your concern is that since NPS was developed with for-profit companies in mind, we shouldn’t assume that a +50 NPS is good for a nonprofit.
If so, that’s fair and I agree.
When people benchmark NPS scores, they usually do it by comparing NPS scores in similar industries. Unfortunately, I don’t know of any data for NPS scores of nonprofits like ours (e.g. consumer-facing and providing a donation service). I think the information about what NPS score is generally considered good is helpful to understanding why we updated in favor of EA Funds persisting past the three month trial.
Is it your view that I a) shouldn’t have included NPS data at all, b) shouldn’t have included information about what scores are good, or c) should have caveated the paragraph more carefully?
I’m not sure I follow the concern here.
Are you arguing a) that the “OPP’s last dollar” content is not attempting to provide an argument, b) that it’s wrong to give an argument if the argument causes guilt as a side effect, or c) for something else?
I’d be willing to defend the claim that it’s acceptable to make arguments for a position even if those arguments have the unintended consequence of causing guilt.
There are a range of reasons that this is not really an appropriate way to communicate. It’s socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling.
It’s a shame you’ve been upset. Still, one can call others’ writing upsetting, immoral, mean-spirited, and so on; there is a lot of leeway to make other reasonable conversational moves.
Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.
I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps—particularly if it’s made explicit that this is a function of the reader and writing, and not the writing alone.
I’ve discussed this with Owen a bit further. How emotions relate to norms of discourse is a tricky topic but I personally think many people would agree on the following pointers going forward (not addressed to Fluttershy in particular):
Dos:
flag your emotions when they are relevant to the discussion. e.g. “I became sick of redrafting this post so please excuse if it comes across as grumpy”, or “These research problems seem hard and I’m unmotivated to try to work more on them”.
discuss emotional issues relevant to many EAs
Don’ts:
use emotion as a rhetorical boost for your arguments (appeal to emotion)
mix arguments together with calls for social support
mix arguments with personal emotional information that would make an EA (or regular) audience uncomfortable.
Of course, if you want to engage emotionally with specific people, you can use private messages.
This is wholly speculative. I’ve seen no evidence that consequentialists “feel bad” in any emotionally meaningful sense for having made donations to the wrong cause.
Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would even be more subtle? Why can’t we control our feelings and not fall to pieces at the thought that we might have been responsible for injustice? The world sucks and when one person screws up, someone else is suffering and dying at the other end. Being cognizant of this is far more important than protecting feelings.
I think you ought to place a bit more faith in the ability of effective altruists to make rational decisions.
Thanks, this was really interesting.
Thanks for the post!
Lewis Bollard gave away $180k, but Nick Beckstead says he only had access to $14k. Was this due to a spike in donations to the far future cause after they made their recommendations?
Nick’s recommendation came much sooner after launch than Lewis’s, so Nick had much less money available at the time.
This is excellent. How might you evaluate fund managers? Do new fund managers have to have an existing relationship with anyone on the team?
We’re still working on the process for adding new fund managers. New fund managers will not need to have a relationship with anyone on the team.
What does an ideal fund manager look like?
(Many questions because I’m really excited and think this is fantastic, and am really glad you’re doing it)
We haven’t decided this yet, but I can share my current guesses. I expect that we’ll be looking for fund managers who have worldviews that are different from the existing fund managers, who are careful thinkers, who are respected in the EA community and our likely pool of donors, and who are willing to devote a sufficient amount of time to manage the fund.
What is the internal process for adding a new fund or manager? What happens after the form is submitted—is it a casual discussion amongst the team, or something else?
Do you have any thoughts as to what the next funds added might be? Does the manager come first, or will you announce things you’d like to have funds in, where you don’t yet have a manager?
Unfortunately, we don’t have any details around this at the moment. We should have more to share once we devote more time to this question over the summer.