GWWC Should Require Public Charity Evaluations
One of the roles of Giving What We Can (GWWC) is to help its members and other interested people figure out where to give. If you go to their site and click "start giving" they list charitable funds, including GiveWell's All Grants Fund and the EA Infra Fund, both of which our family donated to in 2022. Showing funds first reflects their view (and mine!) that most people should donate via funds: a grantmaker with time and resources to determine where money is most needed can generally do a better job allocating funds than an individual.
One of the big downsides of donating via a fund, however, is that you have to trust its grantmakers are allocating your money in line with what you would want. Perhaps they have different values or ways of thinking about uncertainty, risk, and evidence. Or perhaps they're just not very good at their job. One of GWWC's roles is evaluating these funds, helping people figure out who to trust, but if you're more skeptical GWWC also recommends individual charities.
They maintain a list of charity evaluators they trust, and if one of those evaluators recommends a charity then GWWC will list it prominently on their site and badge it as "top rated". You can see these on GWWC's donating page if you scroll down past the funds.
There was recently some discussion on the EA Forum around one of these evaluators, Founders Pledge, and one of their recommended charities, StrongMinds. In March 2019, Founders Pledge performed a detailed investigation of StrongMinds, decided that their work treating depression in low-income countries was highly cost-effective, and wrote up a public evaluation explaining this decision (summary, details). GWWC then listed StrongMinds as a top-rated charity. All makes sense.
While Founders Pledge has continued to follow StrongMinds' work and stands by their recommendation, they haven't had the resources to update their public review. Since Founders Pledge continues to recommend StrongMinds, GWWC continues to consider it a top-rated charity.
This is not a great situation: if you want to give to individual charities because you don't trust grantmakers deciding privately what most needs funding, you don't want to be taking Founders Pledge's word that StrongMinds is still a highly cost-effective opportunity. How has their funding outlook changed in the nearly four years since? Have there been more recent studies on their work or on this kind of intervention?
A case with even less public information is Suvita. GWWC says they recommend Suvita because Founders Pledge's Global Health and Development Fund made a grant there in July 2022. GWWC links to that fund's Q2 2022 grants writeup which has a single paragraph on Suvita.
I think what Founders Pledge is doing here is fine; this is a reasonable level of transparency for a fund making a $50k grant. On the other hand, for a charity that GWWC is promoting directly to donors it's very little to back up a designation of "top rated".
(After sharing a draft of this post with Founders Pledge they linked me to a more detailed writeup on Suvita, but it isn't currently linked from the rest of their site or from GWWC.)
On the EA Forum I proposed that one of GWWC's requirements for endorsing recommendations from their trusted evaluators be that they're supported by current public evaluations. In the case of StrongMinds, once Founders Pledge's public evaluation became stale GWWC could have removed the "top rated" badge. GWWC's response was that they thought their current policy was correct because "our goal is primarily to provide guidance on what are the best places to give to according to a variety of worldviews, rather than what are the best explainable/publicly documented places to give."
In this case, I don't think this should be their goal. The biggest advantage I see to GWWC pointing people to specific charities, not just funds, is that this simpler approach supports people in directing their money effectively even if they don't trust the private decisions of evaluators. This doesn't work without recommendations being backed by reasonably detailed public current evaluations.
Note that this doesn't require that most donors read the evaluations: lower-trust donors still (rightly!) understand that their chances of funding work that's pretty different from what they thought they were funding are much lower if an evaluator has written up a public case. On the other hand, there are several reasons why a donor willing to take an evaluator's word for how effective a charity is might still prefer to donate to an individual charity instead of a fund:
Taxes. Donations to, for example, StrongMinds are tax-advantaged in 22 countries, while donations via the EA Funds platform are tax-advantaged in only 3. If the fund is planning on granting to charity X this year, then your donating to X has similar effects to donating to the fund.
Preference adjustments. Perhaps you agree with a fund in general, but you think they value averting deaths too highly relative to improving already existing lives. By donating to one of the charities they typically fund that focuses on the latter you might shift the distribution of funds in that direction. Or maybe not; your donation also has the effect of decreasing how much additional funding the charity needs, and the fund might allocate more elsewhere.
Ops skepticism. When you donate through a fund, in addition to trusting the grantmakers to make good decisions you're also trusting the fund's operations staff to handle the money properly and that your money won't be caught up in unrelated legal trouble. Donating directly to a charity avoids these risks.
These are real concerns, but they're the kind of concerns sophisticated and committed donors are likely to have. These are the kind of people who are much less likely to put a lot of weight on a "top rated" badge, or to be on the fence about whether to donate. Supporting donors in these kinds of situations is good, but that mostly just requires listing the charities, not marking them as "top rated". Overall, I still think limiting the "top rated" badge and promotion to charities that have current public evaluations is the right choice for GWWC.
Disclosure: my wife used to be President of GWWC, but I haven't run this post by her and I don't know what she thinks of this proposal. I sent a draft of this post to GWWC and Founders Pledge; thanks to Sjir at GWWC for discussion on the Forum that led to this piece, and to Matt at Founders Pledge for his quick responses.
Thanks again for this suggestion, Jeff! However, for reasons mostly outlined in my comment here (under (4)), GWWC's position remains that we should not restrict charity recommendations only to those that have a recent public evaluation available. I'd be interested in any further arguments coming out of this discussion that would update our view, though, and these could feed into a revision of our inclusion criteria later this year.
There's one thing I'd like to add, based on the emphasis of your new post: as you mention, there are multiple reasons why people choose to donate to charities over funds, even while we generally think that donating to funds will be the higher-impact option. I think I have lower credence than you seem to have in "not trusting funds" being the most prominent one, but even if it is, I don't think the current situation is problematic for donors for whom this is the main reason: those donors can easily see whether a particular top-rated charity has a recent public evaluation available (e.g. this will be highlighted on its charity page on the GWWC website), and adjust their decisions accordingly. By keeping the current policy, the "top-rated" label remains representative of where we expect money will actually do the most good, rather than being adjusted for a subgroup of donors who have lower trust in funds.
(As an aside, I don't see why the other reasons you mention for giving to charities (e.g. tax deductibility) would be more characteristic of "sophisticated and committed" donors than having a view on whether or not to trust particular evaluators/funds.)
Could you define what counts as sufficiently "current" in your view?
I am a bit concerned that there are already practices that significantly favor larger organizations, such as GiveWell only considering orgs with tons of room for more funding for its top charity status. It's not cost-effective for evaluators to devote considerable resources to updating evaluations on midsize organizations very often. And there are downsides to putting all the eggs in a few baskets, which I fear a demanding currentness requirement would promote.
I think I would ask the recommending evaluator to affirm every X years that there are no known significant/material changes from the public evaluation, and have an absolute sunset after Y years, though X and Y would differ to some extent based on current organization size. Otherwise, I would model a decay function based on the date of the last public evaluation: the assumed cost-effectiveness declines over time as the amount of potential information not in the public analysis grows, and removal is triggered when the adjusted effectiveness no longer meets GWWC's bar.
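As a rough sketch of how that decay idea could work (purely illustrative: the half-life, the starting cost-effectiveness multiplier, and the bar are made-up numbers, not anything GWWC or any evaluator actually uses):

```python
from datetime import date

def adjusted_effectiveness(base_effectiveness, eval_date, today,
                           half_life_years=3.0):
    """Discount a published cost-effectiveness estimate as it ages.

    Exponential decay: after one half-life the estimate counts for
    half its original value. The half-life here is a made-up
    illustration of how fast public evaluations go stale.
    """
    age_years = (today - eval_date).days / 365.25
    return base_effectiveness * 0.5 ** (age_years / half_life_years)

# Hypothetical numbers: a charity rated 10x cash in a March 2019
# evaluation, checked against an assumed "top rated" bar of 6x
# as of January 2023.
adjusted = adjusted_effectiveness(10.0, date(2019, 3, 1), date(2023, 1, 11))
still_top_rated = adjusted >= 6.0
```

One could make the half-life depend on organization size, in line with the point below about small or fast-growing organizations going stale more quickly.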
In general, I think it makes more sense for smaller organizations to be supported via funds, where grantmakers can engage directly with organization leadership.
On staleness, I would go the other way: a review of a small, agile, or quickly growing organization goes stale a lot faster than a larger and more stable one.
I like your first idea in theory, but I think you have to have enough varied funds in place first. Of the four recommended funds in Global Health & Development, all are GiveWell at their core, and even GiveWell's All Grants Fund gives an estimated 75 percent to GiveWell top charities per https://www.givewell.org/research/all-grants. So everyone who doesn't score well on GiveWell's system is going to get shut out entirely under that approach. This is absolutely not a criticism of GiveWell, which does what it is intended to do very well.
On your second point, I also agree in theory; I think SM's growth in room for funding is one reason I qualitatively find the public report a bit stale. But how quickly to age these out loops back, in part, to whether the funds are diverse enough and willing enough to fund a range of small/midsized organizations.
I'll just add that from SoGive's perspective, this proposal would work. We have various views on charities, but only the ones which are in the public domain are robustly thought through enough that we would want an independent group like GWWC to pick them up.
The publication process forces us to think carefully about our claims and be sure that we stand by them.
(I appreciate that Sjir has made a number of other points, and I'm not claiming to answer this from every perspective.)
SoGive is not currently on GWWC's list of evaluators; GWWC plans to look into us in 2023.
"One of the roles of Giving What We Can (GWWC) is to help its members and other interested people figure out where to give." Is this a recent addition to the GWWC mission statement? I've been a member for a while and wasn't under the impression that GWWC was in the business of doing charity evaluations or meta-charity/fund evaluations. I assumed GWWC always emphasized the pledge, why to give, how much to give, but not saying much about where to give beyond pointing to GiveWell or whatever. If a big component of GWWC has always been about where to give, I must have missed that. Has GWWC emphasized the where-to-give piece more in recent years?
I think the current thinking is: "Evaluating the evaluators": GWWC's research direction
Initially GWWC did their own charity evaluation, and had some public disagreements with GiveWell (ex: GWWC's 2014 Why we (still) don't recommend GiveDirectly). Sometime around 2016 (compare team archive snapshots in mid-2016 with mid-2017) GWWC disbanded their research department, and then stopped having full-time staff. In 2020 Luke took over GWWC leadership, and my interpretation is that the "evaluating the evaluators" direction was started under Luke.
EDIT: I had tried to find a GWWC blog post about getting out of research, but it turns out it was a CEA post:
https://www.centreforeffectivealtruism.org/blog/cea-strategic-update
Yep, Jeff's pretty much captured it all here.
GWWC's mission is to "make giving effectively and significantly a cultural norm" and the pledge plays a big part in that, as does advocating for and educating about effective giving.
Supporting donors/members in giving effectively has always been a part of GWWC, but what that's looked like has changed over the years (from very detailed charity evaluation through to just linking off to GiveWell/ACE/EA Funds when there was no one working full time on GWWC).
Thanks for the clarification!
I took the pledge in 2016, which coincided with when the research department was disbanded per Jeff's comment. I think that explains why I perceived GWWC to not be in the business of doing evaluations. Glad to see "evaluate the evaluators" working its way back in.