Agreed (though personally I might be willing to make a bet if e.g. fund manager selection is done well)
Sjir Hoeijmakers
Where are the GWWC team donating in 2023?
Very excited about this, both about the clarification of scope and the scope itself.
I strongly agree there is currently a gap in terms of principles-first EA funders, and also largely agree with the way you’ve outlined “principles-first EA” here. I think this new scope will make me seriously consider becoming a donor to the EAIF in the new year.
That’s great to hear Jonas, please let us know if we can do anything else to help! As mentioned in our reports and back when we announced the project, part of the motivation for doing this work is to support other effective giving organisations like Giv Effektivt to be able to make more informed decisions on their recommendations.
And yes, agree that these comments provide a bit of the next layer… let’s see where it stops!
Thanks Andrew. I hope I answered most of your question in my response to MHR above, but on the EV part (caveating that I am not speaking on behalf of EV here, nor do I have legal expertise on the governance question; this is just my personal understanding of the situation):
GWWC and EA Funds are separate projects within EV; are managed separately; and communicate separately. I would be surprised if we were to discontinue supporting the EA Funds on our donation platform, given they clearly meet our inclusion criteria, but there is no need/pressure for us to recommend EA Funds (e.g. we currently don’t recommend the EA GHD Fund nor the EA Infrastructure Fund, as we haven’t looked into them yet). We acknowledge the conflict of interest, but I hope our reports on the EA AWF and EA LTFF show we are not holding back on pointing out where we think EA Funds can improve.
As I understand it, there are legal restrictions EV (including GWWC and EA Funds) has to obey, and if EA Funds were ever to allocate funding in ways that aren’t in accordance with EV’s stated purpose, that would obviously have consequences, but I’d expect those types of situations won’t have much to do with GWWC in particular.
That’s about as much as I know to say on this; hope it answers your question!
Thank you! Great question. I can’t speak on behalf of EA Funds and their plans going forward, but I can say our new GWWC cause area funds are meaningfully different from their funds (at least as they’ve been operating so far).
The biggest differences IMO are:
The EA Funds generally (with the exception of the GHD Fund) only make grants to organisations that apply for funding with them.
The EA Funds are managed by a limited set of expert grantmakers.
Our GWWC cause area funds, on the other hand, ultimately aim to cover recommendations and grantmaking by nearly all impact-focused evaluators and grantmakers, based on our evaluations of these evaluators, but we don’t accept any grant applications ourselves.
For instance, EA Funds currently doesn’t consider any of Founders Pledge’s or Longview’s evaluations or active search for high-impact opportunities to inform their grantmaking, whereas our GWWC Funds do (or will do in the case of Founders Pledge) and additionally consider EA Funds as a grantmaker/evaluator.
For the EA GHD Fund and the GWWC GH&W Fund in particular, the difference is currently less pronounced. This is because we ended up working with GiveWell based on our initial evaluations, and EA Funds has historically asked them to advise their fund as well. However, this could easily change in the short- to medium-term, e.g. we hope to evaluate both Happier Lives Institute and Founders Pledge next year as candidates for evaluators informing our GH&W Fund grantmaking in addition to GiveWell.
Hope that clarifies a bit! Happy to elaborate further on the differences if helpful.
Thank you! As we mention in the report, we’re grateful for how you’ve engaged with our evaluations process, and I think this comment is a good illustration of the open, constructive and collaborative attitude you’ve had throughout it. We look forward to re-evaluating ACE’s work next year, and in the meantime remain excited to host many of ACE’s funds and recommendations on our donation platform as promising opportunities for donors to consider.
Hi Moritz, yes, if you ask me personally, I would currently lean towards recommending MG over a randomly picked ACE-recommended charity, though I’m far from confident in this, and it’s not a claim I would be able to justify to the extent we usually want to justify our recommendations at GWWC. It’s mainly based on my view that the difference between the AWF and MG is fairly small (both are broadly trying to make cost-effective grants and are getting promising applications on the margin), whereas our criticism of ACE’s charity evaluations process more fundamentally challenges its ability to come up with highly cost-effective donation opportunities on the margin (though I also don’t want to overstate our conclusion there). I would furthermore guess that MG is, or will be, more funding-constrained relative to its aims and applications than most of ACE’s individual charity recommendations. (But really, this is a guess: note that I haven’t looked into the charity recommendations individually!)
Thanks for your question!
The important nuance here is that while we concluded that ACE’s current charity evaluation process doesn’t measure marginal cost-effectiveness to a sufficient extent for us to directly rely on ACE’s recommendations, that isn’t the same as the (stronger) claim that its recommendations are necessarily worse donation opportunities than the AWF or THL’s corporate campaigns, and it also isn’t the same as claiming that ACE’s process doesn’t track marginal cost-effectiveness at all.
We can’t say confidently how ACE’s (other) recommendations compare to the AWF or THL’s corporate campaigns, as we haven’t individually evaluated and compared them. So we want to offer donors who have the time and expertise to look into these promising individual charities the opportunity to do so, and potentially donate to them if they find them to be impact-maximising by their worldview, as we do for many more charities and funds on our platform that we can’t currently justify recommending (for instance because they haven’t yet been evaluated).
You may also be interested in our answer to this somewhat related question under the AMA post.
Thank you, Peter, we’re obviously very happy to hear this!
So by default, GFI, Sinergia, Fish Welfare Initiative, Kafessiz and DVF were all excluded from potentially being identified (which seems illogical, as there is no obvious reason to think that charities evaluated in 2022 would be less cost-effective)
Yes they were, as was any charity other than the three we asked ACE to send us more information on (based on where they thought they could make the strongest case by our lights). Among those, we think ACE provided the strongest case for THL’s corporate campaigns, and with the additional referral from Open Phil + the existing public reports by FP and RP on corporate campaigns, we think this is enough to justify a recommendation. This is what I meant by there indeed being a measurability bias in our recommendation (which we think is a bullet worth biting here!): we ended up recommending THL in large part because there was sufficient evidence of cost-effectiveness readily and publicly available. We don’t have the same evidence for any of these other charities, so they could in principle be as or even more cost-effective than THL (but also much less!), and without the evidence to support their case we don’t (yet) feel justified recommending them. We don’t have capacity to directly evaluate individual charities ourselves (including THL!), but continue to host many promising charities on our donation platform, so donors who have time to look into them further can choose to support them.
To put this differently, the choice for us wasn’t between “evaluating all of ACE’s recommendations” and “evaluating only THL / three charities” (as we didn’t have capacity to do any individual charity evaluations). The choice for us was between “only recommending the AWF” and “recommending both the AWF and THL’s corporate campaigns” because there happened to already be sufficiently strong evidence/evaluations available for THL’s corporate campaigns. For reasons explained earlier, we stand by our decision to prefer the latter over the former, even though that means that many other promising charities don’t have a chance to be recommended at this point (but note that this is the case in charity evaluation across cause areas!).
Given you only looked at three of the ACE 2023 recommendations (and you didn’t say which ones), I’m wondering how you can make such a strong claim for all of ACE’s recommended charities?
Could you clarify which “strong claim for all of ACE’s recommended charities” you are referring to? From the executive summary of our report on ACE:
We also expect the gain in impact from giving to any ACE-recommended charity over giving to a random animal welfare charity is much larger than any potential further gain from giving to the AWF or THL’s corporate campaigns over any (other) ACE-recommended charity, and note that we haven’t evaluated ACE’s recommended charities individually, but only ACE’s evaluation process.
On a slightly unrelated point: For the referral from OP, I would be curious to hear if you asked them “What is the most cost-effective marginal giving opportunity for farmed animal welfare” (to which they replied THL’s corporate campaigns) or something closer to “Do you think THL is a cost-effective giving opportunity on the margin?”
The latter, because a referral by OP on its own wouldn’t have been sufficient for us to make a recommendation (as we haven’t evaluated OP): for recommending THL’s corporate campaigns, we really relied on these four separate pieces of evidence being available.
I should have said “One of the top 2 marginal giving opportunities” but I still think I stand by my point that many experienced animal advocates would disagree with this claim, and it’s not clear that your charity recommendation work has sufficient depth to challenge that (e.g. you didn’t evaluate groups yourself), in which case it’s not clear why folks should defer to you over subject-matter experts (e.g. AWF, OP or ACE).
We’re not even claiming it is one of the top 2 marginal giving opportunities, just that it is the best recommendation we can make to donors based on the information available to us from evaluators. If you could point us to any alternative well-justified recommendations/evaluators for us to evaluate, we’d be all ears.
And we don’t claim people should defer to us directly on charity evaluations (again, we don’t currently do these ourselves!). Ultimately, our recommendations (including THL!) are based on the recommendations of the subject-matter experts you reference. The purpose of our evaluations and reports is to help donors make better decisions based on the recommendations and information these experts provide.
Thanks for your comments and questions, James.
Surely if you thought that EA AWF was a good evaluator or donation opportunity for donors, you would just let them manage the entirety of the fund? As then EA AWF would be able to distribute to THL if they actually thought THL was the most effective use of funds on the margin. And if not, even better, as they can give to more effective opportunities.
The short answer is “no”: we don’t think we can currently justify the claim that giving to the AWF is better than giving to THL’s corporate campaigns, or vice versa. We did indeed conclude from our evaluation that the AWF can likely use marginal funds cost-effectively, but that isn’t the same as deferring to them on all fronts (including because we also found significant room for improvement, as explained in the AWF report), nor does it imply the AWF is better at allocating extra capital than THL is.
I’m also curious why you felt the need to recommend at least one competitive alternative to the AWF, when the AWF itself is a fairly diversified fund? Arguably, you marked ACE down for similar reasoning in your evaluation of their Movement Grants (that they were spreading their grants across many groups rather than focusing mostly on the most effective groups)
Our goal is to provide recommendations that help donors maximise their impact from the perspective of a variety of worldviews, and it’s in that light that we decided to also recommend THL’s corporate campaigns. Consider that someone else could have made an (I think justifiable) comment that is entirely the opposite of yours: that we should only recommend THL, because there we actually have some independent evidence of the intervention working and being highly cost-effective, which is lacking for many if not most of the projects the AWF funds (given the early stage of the AW charity evaluation space).
We criticised ACE MG not for making grants to multiple groups but for doing so at the seeming expense of expected impact. As mentioned above, we don’t think THL’s corporate campaigns are a worse donation opportunity than the AWF, and we think there may be donors who think it’s more cost-effective in expectation, for instance because they put less weight on the individual judgement of grantmakers (or on our judgement in evaluating the AWF to be a good donation opportunity!) and think the publicly available evidence for THL is stronger.
Statements like this make me worry that this evaluation focused too much on the certainty of some positive impact, rather than maximising expected impact (i.e. measurability bias).
I think you’re right to worry about this (I do as well!), as I would say there is some implicit measurability bias in our recommendations. Most notably, we ended up recommending THL’s corporate campaigns over other ACE recommendations not because we have strong evidence that they are a better donation opportunity than any other individual ACE recommendation, but because they are the only one where we think we have sufficient evidence to justify recommending them.
However, this is importantly different from us prioritising certainty of some positive impact over maximum expected impact: THL’s corporate campaigns are our best-guess donation opportunity for maximising expected impact (alongside the AWF). If we thought we could have easily justified that any one of ACE’s other recommendations was better (or even just as good) from that perspective, we would have recommended them, but we currently can’t. And please note that “justifying” here isn’t about finding “certainty of positive impact”: we are looking for the expected value case (as we do for the AWF and our other recommendations as well).
As mentioned in the comment above, you would struggle to find many experienced animal advocates who would confidently recommend THL as the single best marginal giving opportunity. In reality, they would likely either advocate for a spread of groups using different approaches or just simply give to a fund (e.g. EA AWF or ACE).
This is a much stronger claim than we are making (THL’s corporate campaigns being the “single best marginal giving opportunity”): we think it’s one of the two best donation opportunities we can, from the information we have available, recommend to a broad set of donors to maximise their expected impact. We are not claiming that nobody could do better (certainly by their individual values/worldview!), and encourage donors to do their own (further) research if they have the time and expertise available. This is also why we host a broader selection of promising programs donors can look into and support on our donation platform.
Thanks Lauren for your question, and thanks Vasco for helping to answer it! I’ve replied to the comment under the post on our evaluations that I believe you’re referring to, and am happy to elaborate on any part of my answer there (and what’s in the report / what Vasco shared) if helpful.
Thanks for your question! We explain the general principles we used to choose which evaluator to investigate here, and go into our specific considerations for each evaluator in their evaluation reports.
For FP’s GCR Fund compared to LTFF and LLF specifically, some of the main considerations were (1) our donors had so far been donating most to the LTFF, so the stakes were higher there, and (2) Longview was one of the most-named options by other effective giving organisations as an evaluator they weren’t relying on yet but were interested in learning more about.
And yes there are other evaluators we’ve considered and are considering for future evaluations, some of which we mention throughout the reports. See here for an overview of the impact-focused evaluators making publicly available recommendations that we are currently aware of, and which we may consider in our next iterations of this project.
GWWC’s new recommendations and cause area funds
First of all, thank you for the extensive comments!
I can give more context during our AMA next week if helpful (I won’t have much time to engage in the coming few days, unfortunately), but wanted to react quickly to avoid a misunderstanding about our views here. I’ve copy-pasted the relevant section from the report below:
To be clear, there are strong limitations to this recommendation:
We didn’t ourselves evaluate THL’s work directly, nor did we compare it to other charities (e.g., ACE’s other recommendations).
The availability of evidence here may be high relative to other interventions in animal welfare, but is still low compared to interventions we recommend in global health and wellbeing. We haven’t directly evaluated Open Philanthropy, Rethink Priorities, or Founders Pledge as evaluators.
We have questions about the external validity of the evidence for corporate campaigns, i.e. whether they are as cost-effective when applied in new contexts (e.g. low- and middle-income countries in Africa) as they seem to have been where the initial evidence was collected (mainly in the US and Europe).
We also have questions about the extent to which the evidence for corporate campaigns is out of date, as the Founders Pledge and Rethink Priorities reports are from more than four years ago and we would expect there to be diminishing returns to corporate campaigns over time, as the “low-hanging fruits” in terms of cost-effectiveness are picked first.
Taken together, all of this means we expect funding THL’s current global corporate campaigns to be (much) less cost-effective than the corporate campaigns in 2016-2017, which were evaluated in those reports.^1
^1 It is worth noting that Open Philanthropy confirmed to us that it thinks so as well: its referral is not a claim that funding THL’s corporate campaigns will be exactly as cost-effective as it probably was a couple of years ago, when THL achieved big wins on a small budget, but a claim that funding them is likely still among the most cost-effective options in the space, and that THL can productively use a lot of extra funding without strongly diminishing marginal returns to funding currently provided.

So in short, we share your impression that THL’s work is (much) less cost-effective than it was a few years ago. We are aware of Open Phil’s views on this, and their referral of THL’s work to us took these diminished expected returns into account. The FP and RP reports weigh (much) less heavily in our recommendation of THL’s current work than ACE’s and OP’s recommendations, but we think those reports still provide a useful (and publicly accessible) reference on corporate campaigns as an intervention more generally.
Congratulations both to Zach for taking on this important role and to CEA for finding such a capable candidate! Based on my personal interactions with Zach, I’m excited to see where he’ll lead CEA and optimistic about him contributing to a strong, principles-based EA community. He seems to me a person of both high integrity and professionalism, who deeply cares about making the world a better place, and who is able to set and execute on a vision. From a GWWC perspective, I’m also looking forward to collaborating with him in his new capacity on making effective giving, and effective altruism principles more broadly, more of a global norm!