Please see my updates in the main post and let me know if you still have questions about this. (Do you now understand why we didn’t recommend any other specific GW- or FP-recommended charity in this report, but referred to them as a group?)
As I mentioned in the other comment, I am still not sure why you do not recommend any GW top charities directly. It seems like your report should answer the question “what charities improve women’s health the most?”, not the question “what charities that focus exclusively on women’s health are most effective?”. The second is a much narrower question, and its answer will probably not overlap much with the answer to the first.
You mention them, but only in a single paragraph. It seems that even from the narrow value perspective of “I only care about women’s empowerment”, the question “are women helped more by GiveWell charities or by the charities recommended here?” is a key one that your report should try to answer.
The top of your report also says the following:
We researched charity programmes to find those that most cost-effectively improve the lives of women and girls.
This, however, does not actually seem to be the question you are answering, as I mentioned above. I expect the best interventions for women’s empowerment not to focus exclusively on it (there are many more charities trying to improve overall health, women’s empowerment seems to overlap a lot with general health goals, etc.). I even expect them not to overlap much with GiveWell’s recommendations, though that’s a critique on a higher level that I think we can ignore for now.
To be transparent about my criticism here: the feeling I’ve gotten from this report is that its goal was not to answer the question “how can we best achieve the most good for the value of women’s empowerment?” but rather the question “what set of charity recommendations will most satisfy our potential donors, by being rigorous and seeming to cover most of the areas we are supposed to check?”.
To be clear, I think the vast majority of organizations fall into this space, even in EA, and I have roughly similar (though weaker) criticisms of GiveWell itself, which focuses on global development charities in a fairly unprincipled way. I think that has a lot to do with global development being transparent in a way that more speculative interventions are not (though most of the key staff have since moved from GiveWell to OpenPhil, I think in part because of the problems with the approach I am criticizing here).
I think focusing on that transparency can sometimes be worth it for an individual organization in the long run, by demonstrating good judgement and thereby attracting additional resources (as it did in the case of GiveWell), but it generally results in work that is not particularly useful for answering the real question of “how can we do the most good?”.
And on the margin I think that kind of research is net-harmful to the overall quality of research and discussion on general cause-prioritization, because it spreads a methodology that is badly suited to the much more difficult questions of that domain (similar to how p-value significance testing has had a negative effect on psychology research: it is badly suited to the actual complexity of the domain, while still being well suited to answering questions in a much narrower one).
I think this report is overall pretty high-quality by the standards of global development research. But a large number of small things (the choice of focus area, limiting yourself to charities exclusively focused on women’s empowerment, the narrow methodological focus, and I guess my priors for orgs working in this space) give me the sense that it was not primarily written to answer the question “what interventions will actually improve women’s lives?”. Instead it seems to be doing a broader thing, a large part of which was to look rigorous and principled, conform to what your potential donors expect from a rigorous report, be broadly defensible, and fit the skills and methodologies of your current team (because those are the skills that are prevalent in the global development community).
And I think all of those are reasonable aims given FP’s goals. I just think that, taken together, they make me expect that EAs with a different set of aims will not benefit much from engaging with this research. And because you can’t be fully transparent about those aims (doing so would confuse your primary audience or be perceived as deceptive), it will inevitably confuse at least some of the people trying to do something more aligned with my aims, and detract from what I consider key cause-prioritization work.
This overall leaves me in a place where I am happy that this research and FP exist, and where I think they will cause valuable resources to be allocated towards important projects, but where I don’t really want a lot more of this kind of research to show up on the EA Forum. I respect your work and think what you are doing is broadly good (though I obviously always have recommendations for things I would do differently).
Hi Habryka,
This is to thank you (and others) once more for all your comments here, and to let you know that they have been useful: we have incorporated some changes to account for them in a new version of the report, which will be published in March or April. They were also useful in our internal discussion on how to frame our research, and we plan to keep improving our communication around this throughout the rest of the year, e.g. by publishing a blog post / brief on cause prioritisation for our members.
I also largely agree with the views you express in your last post above, insofar as they pertain to the contents of this report specifically. However, very importantly, I should stress that your comments do not apply to FP research generally: we generally choose the areas we research through cause prioritisation, i.e. in a cause-neutral way, and we do try to answer the question ‘how can we achieve the most good?’ in the areas we investigate, not (even) shying away from harder-to-measure impact. In fact, we are moving more and more in that direction, and are developing research methodology to do so (see e.g. our recently published methodology brief on policy interventions).
Some of our reports so far have been exceptions to these rules, for pragmatic (though impact-motivated) reasons, mainly:
We quickly needed to build a large enough ‘basic’ portfolio of relatively high-impact charities, so that we could make good recommendations to our members.
There are some causes our members ask lots of questions about / are extra interested in, and we want to be able to say something about those areas, even if we ultimately recommend that they focus on other areas instead, when we find better opportunities there.
But there are definitely ways in which we can improve the framing of these exceptions, and the comments you provided have already been helpful in that way.