Thanks for all you do.
I feel that changing the nature of the Maximum Impact Fund in this way should come with a renaming of the fund: it is no longer going all-out on expected value, so where it was previously “maximizing” expected “impact”, it no longer is. And many donors have come to expect that the MIF is the go-to for high-EV donations, and will not notice this change.
Something like the ‘Top Charities Fund’ or ‘High Impact Fund’ flags the fundamental change, and is a bit less misleading.
What about “Direct Impact Fund”?
Hi, David(s),
Thanks for your feedback! We considered renaming the Maximum Impact Fund in the lead-up to these changes, but decided not to in the end. The Maximum Impact Fund has been a popular giving option that’s attracted a lot of new donors; we wanted to err on the side of not confusing these newer donors, who we believe associated this fund with high confidence rather than with high expected value, despite the name. For those like you who follow our work more closely, we expected the risk of confusion would be lower: we thought they’d be more likely to understand the differences in the funds and find it relatively straightforward to make the switch to supporting All Grants versus the Maximum Impact Fund, if their priority was high expected value.
All that said, while this was a considered decision, we recognize that it might not have been the right one. Part of our reason for introducing the All Grants Fund was to help clarify our giving options for donors, and it’d obviously be bad if keeping the Maximum Impact Fund’s name were introducing more confusion than clarity. We appreciate your feedback about this, and we’ll certainly think about whether this decision merits revisiting!
Best, Miranda
Similar thoughts crossed my mind (including the thanks!).
For a donor whose values are closely aligned with GiveWell’s and who trusts them to spend wisely on operations, it seems like an unrestricted donation might actually have the highest expected impact.
But it also seems like there’s potentially a funging “cascade” across the different options such that marginal donations would be equivalent under certain circumstances, depending on details like the Excess Assets Policy.
I’d be very interested in an in-depth comparison of the different options for giving through GiveWell in terms of expected impact, funging, optimizer’s curse, value alignment etc.
Hi, Andrew,
Apologies for the delay in responding!
It is a bit tricky to compare expected impact across the various funds. The tl;dr answer, without putting any real calculations into it, is that in practice, we don’t expect there to be large differences. But theoretically there could be, and if you can give unrestricted or less restricted (assuming you trust GiveWell), that’s probably better, as it allows us to deploy funding where it will be most impactful. Here are a few points to consider.
1. Because a large portion of our funding is either technically unrestricted or flexible, we think that in practice, it’s unlikely that our grantmaking to top charities or non–top charity programs will be constrained by the proportion of funding we receive for the Maximum Impact vs. All Grants fund. A lot of our funding comes from Open Philanthropy as flexible funding intended for grantmaking (so, very similar to funding from the All Grants Fund); this typically has gone to a mix of top charities and other programs. We also receive enough unrestricted funding nowadays that some of it ends up getting granted out, due to the excess assets policy you mention, as well as our single-donor cap, which prevents one donor from providing too much of our operating support (more here). In 2021, the vast majority of our grants from unrestricted funding went to top charities, and indeed the majority of our grant funding in general goes to top charities—as mentioned in the above post, we think that the ratio this year will be about 3:1 (based on the pipeline of opportunities we’re looking at right now, not on any rules or proclivities).
2. Though we will use the All Grants Fund to support some opportunities that are higher-expected-value than our top charities, we wouldn’t predict that the All Grants Fund will be systematically higher in expected value than the Maximum Impact Fund. We are using the same cost-effectiveness bar for grants from both funds, and many of our grants to non–top charity programs are similar in cost-effectiveness to—not greatly more cost-effective than—our top charities.
3. With all of the above said, we agree with your suggestion that the most impactful way to give to GiveWell, assuming you trust our decision-making, is unrestricted. There could be a world in which we get way, way more Maximum Impact Fund donations than All Grants Fund donations, and because we’re compelled to spend the former on top charities, we end up funding still-excellent-but-less-cost-effective opportunities from those charities (say, 8x cash rather than 10x) and have to raise our bar for granting to other programs to, say, 12x because we’re flexible funding–constrained. We don’t think that’s going to happen because in reality, as noted above, a lot of our funding is flexible. But giving to us unrestricted (or restricted to grantmaking only, through the All Grants Fund) means that we can shift funding around as needed such that we’re maximizing the overall impact of our portfolio. The Maximum Impact Fund, however, remains our top recommendation for donors who want to be assured that their donation goes toward high-impact/high-confidence opportunities, versus the riskier options that might be funded via the All Grants Fund.
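To make the hypothetical in point 3 concrete, here is a minimal sketch (not GiveWell’s model; every cost-effectiveness figure and funding gap below is made up) comparing a restricted-heavy mix of donations against the same total given flexibly:

```python
# Illustrative sketch only, with hypothetical numbers: flexible money can fill the most
# cost-effective gaps first, while heavily restricted money can get stranded at lower-value gaps.

# name -> (cost-effectiveness in multiples of cash, funding gap in $M, is_top_charity)
gaps = {
    "top charity gap A":   (10, 40, True),
    "top charity gap B":   ( 8, 60, True),
    "other program gap C": (12, 30, False),
    "other program gap D": ( 9, 50, False),
}

def allocate(budget, remaining, top_only=False):
    """Spend `budget` on the most cost-effective remaining gaps; mutate `remaining`; return impact."""
    impact = 0.0
    for name in sorted(remaining, key=lambda n: -gaps[n][0]):
        ce, _, is_top = gaps[name]
        if top_only and not is_top:
            continue
        spend = min(budget, remaining[name])
        remaining[name] -= spend
        impact += spend * ce
        budget -= spend
        if budget <= 0:
            break
    return impact

# Scenario 1: most donations restricted to top charities ($120M), little flexible ($30M).
rem = {n: g[1] for n, g in gaps.items()}
scenario_1 = allocate(120, rem, top_only=True) + allocate(30, rem)

# Scenario 2: the same $150M given unrestricted / through All Grants.
rem = {n: g[1] for n, g in gaps.items()}
scenario_2 = allocate(150, rem)

print(f"Restricted-heavy mix: ~{scenario_1:.0f} cash-equivalent units")
print(f"Fully flexible:       ~{scenario_2:.0f} cash-equivalent units")
```

With these made-up inputs, the restricted-heavy mix both leaves some restricted money unspent and misses part of the most cost-effective gap, which is the constraint the comment above describes; in practice, as noted, GiveWell expects enough flexible funding that this doesn’t bind.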
I hope that’s somewhat helpful!
Best,
Miranda
This seems like a move towards being more internally consistent. The inclusion of GiveDirectly in the Maximum Impact Fund was hard to justify when it was 10x less effective than the other charities included, by their own metrics.
With deworming the story seems a bit more nuanced. The uncertainty is higher and GiveWell wants to emphasise ‘sure bets’ more. I wonder if the HLI evaluation had any impact on this. My takeaway from that evaluation was that the methods used here by GW were somewhat ad hoc, but that we should actually update towards higher impact, as the study they strongly discounted had a very positive result.
I also appreciate GiveWell’s discussion of fungibility here. This might clear up some of the past confusion about “does no room for more funding for X mean that donations to X have no marginal impact?”
Hi, David,
Thank you for your comment! To clarify one point from what you wrote: the critique of our deworming analysis from Happier Lives Institute was not a factor in our decision to update our top charity criteria. We had been planning an update of this kind for about a year before Wednesday’s announcement, and only began communicating with HLI about deworming a couple of months ago.
HLI’s engagement has led us to begin considering changes to our cost-effectiveness analysis for deworming (and to how we present the decisions behind our models in general). But Wednesday’s announcement does not represent a change in our analysis of deworming; it is about a change to our criteria for top charities. We expect to continue to recommend funding for cost-effective gaps we find in deworming—we’ll just be recommending it from pots of money other than the Maximum Impact Fund.
I hope that’s helpful!
Best, Miranda
How many people are in this category? I think that standard non-EA charities would be loath to take people out of recurring donations because they think ‘inertia’ is a very important factor. I think this will be less so for EA and adjacent ‘conscious, informed’ donors, but it still might be a thing. Perhaps there should be some very positive (and easy to understand and justify) ‘default’ option for these people?
Hi, David,
To clarify our process here, which we haven’t detailed except in emails to the donors to whom this applies:
We plan to stop accepting donations to the programs previously on our top charities list by December 31, 2022. In the few months before then, we are contacting donors with open recurring gifts to these programs several times, asking them if they’d like to reallocate their gifts elsewhere. If we don’t get a response from these donors by the December 31 deadline, we are automatically cancelling any portions of recurring donations that are allocated to one of the five former top charities (except those set up via PayPal; see below). If I’m interpreting you correctly, you’re asking why we’re choosing to cancel those donations instead of automatically reallocating them to a different program or fund—is that right?
We made this decision because it was more practical for us administratively, and because we expect that we’ll ultimately end up needing to cancel very few donations, if any. Most donors will be able to choose their reallocation rather than this happening by default. When we discontinued the standout charity designation, the vast majority of donors switched their designation to a different program based on their preference; we think that will most likely happen this time as well, so we don’t expect to end up missing out on large amounts of funding.
The exception to the process above is recurring donations through PayPal—we can’t cancel or change these ourselves, so, after December 31, any funds we get from donations to previous top charities will be automatically reallocated to the Maximum Impact Fund (which seemed like a simple, justifiable default).
I hope that’s helpful, and thanks for your engagement!
Miranda
Thanks very much for the update!
Do GiveWell’s published cost-effectiveness estimates already include an adjustment for the optimizer’s curse? Or is the idea that donors should treat estimates like 35x cash as “raw” expected value calculations, to which they apply their own informal Bayesian adjustment along the lines of Holden’s post?
Hi, Andrew,
Yes, the cost-effectiveness estimates we discuss publicly, including the 35x cash (preliminary!) estimate for the maternal syphilis expansion grant, incorporate all “human adjustments” we make to raw expected value, which often appear as “supplemental adjustments” in our cost-effectiveness analyses. These include factors such as likelihood of leverage and funging; charity-level risks, like wastage or funds being diverted for some other purpose; or intervention-level adjustments, like reduction in nonfatal illness or spillover effects. We don’t explicitly model these factors, but incorporate rough best guesses of their effects, which can shift the final cost-effectiveness estimate.
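As a rough illustration of the structure described above (not GiveWell’s actual figures; every number below is hypothetical), a published multiple is roughly the raw expected-value estimate scaled by supplemental adjustment factors:

```python
# Minimal illustration only, with made-up numbers: the published multiple already reflects
# supplemental adjustments applied to the raw expected-value estimate, roughly like this.

raw_multiple_of_cash = 45.0          # hypothetical "raw" expected-value estimate

# Hypothetical adjustment factors (<1 reduces, >1 increases the estimate)
adjustments = {
    "leverage_and_funging": 0.90,
    "charity_level_risks":  0.95,    # e.g. wastage, funds diverted
    "intervention_level":   0.91,    # e.g. reduction in nonfatal illness, spillovers
}

published = raw_multiple_of_cash
for factor in adjustments.values():
    published *= factor

print(f"Published estimate: ~{published:.0f}x cash")   # ~35x with these made-up inputs
```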
Best,
Miranda