This seems like a move towards being more internally consistent. The inclusion of GiveDirectly in the Maximum Impact Fund was hard to justify when it was 10x less effective than the other included charities, by GiveWell's own metrics.
With deworming the story seems a bit more nuanced. The uncertainty is higher, and GiveWell wants to emphasise more 'sure bets'. I wonder if the HLI evaluation had any impact on this. My takeaway from that evaluation was that the methods GiveWell used here were somewhat ad hoc, but that we should actually update towards higher impact, since the study they strongly discounted had a very positive result.
I also appreciate GiveWell's discussion of fungibility here. This might clear up some of the past confusion about whether "no room for more funding" for a charity means that donations to it have no marginal impact.
Hi, David,
Thank you for your comment! To clarify one point from what you wrote: the critique of our deworming analysis from Happier Lives Institute was not a factor in our decision to update our top charity criteria. We had been planning an update of this kind for about a year before Wednesday’s announcement, and only began communicating with HLI about deworming a couple of months ago.
HLI’s engagement has led us to begin considering changes to our cost-effectiveness analysis for deworming (and to how we present the decisions behind our models in general). But Wednesday’s announcement does not represent a change in our analysis of deworming; it is about a change to our criteria for top charities. We expect to continue to recommend funding for cost-effective gaps we find in deworming—we’ll just be recommending it from pots of money other than the Maximum Impact Fund.
I hope that’s helpful!
Best, Miranda