Thanks for these questions Siebe! And I take your point on sharing context; I’ll edit in some points in the main post.
1. We have internally compared these charities on something close to a DALY-equivalent to aid our decisions (similar to what GiveWell does in their cost-effectiveness analyses), but have not included this in the report. This is not because of any assumptions the report makes on empowerment (note that it defines empowerment simply as ‘improving lives’). It’s mainly because of time constraints: we didn’t think it was worth putting in the time to present our estimates in a polished way, given the aims we have with this report (making high-quality recommendations to our members). This is also because internally we are still in the process of developing our views on how to compare across causes and outcome metrics.
2. In terms of cost-effectiveness estimates, both do better than GiveDirectly (which we also recommend), though there is obviously large uncertainty in such estimates. Furthermore, Bandhan only accepts donations over $320,000 at this point. Last but not least, the organisations differ in marked ways (where they work, programme focus, target group, type of evidence) and might appeal to different people in our community.
3. We do have rough cost-effectiveness models on almost all of the other charities, but unfortunately I cannot make those public. This is partially for reasons of information sharing (I’d have to check with the charities that provided extra info), but also because these models aren’t as worked out as the ones in the report, and a one-to-one comparison would in many cases be confusing rather than valuable. In fact, most initial cost-effectiveness estimates of the other charities are higher than the final estimates of the recommended charities, and we had to deprioritise them to a large extent because the evidence was weaker. Moreover, we find that as we do a more extensive cost-effectiveness analysis of a charity (as we did for our recommended charities), the numbers often go down rather than up, so it’s likely that our ‘final’ estimates of the other charities would be much lower than the initial, rough estimates we have now.
I’d distinguish between two ways in which a report can ‘be’ cause-neutral:
1. Whether its domain of focus/cause area was chosen purely through cause prioritisation
2. Whether its contents are of value from a cause-neutral perspective
Now I agree that this report is not cause-neutral on (1): it was written at least partially because many of FP’s community members are interested in women’s empowerment.*
However, note that cause prioritisation is just a heuristic to restrict our domain of search: what you want to compare in the end are the (donation) opportunities themselves, not which cause/domain they happen to be in by some categorisation.
Maybe you don’t think women’s empowerment should be the first domain to check when you are looking for the highest-impact charities overall, but you should at least agree that it is valuable from a cause-neutral perspective to know what the best charities within this particular domain of search are. You might then be surprised that they are actually better than you thought, or you might find that your intuition of other areas having better opportunities is confirmed.
As the methodology of this report allows you to compare the charities to those in other areas (we don’t use outcome measures restricted to women’s empowerment, and the analysis is done in a cause-neutral frame), I think it is cause-neutral on (2). I hence think it’s very much worth discussing its contents on the EA forum (from a cause-neutral perspective of course!), e.g. how do the recommended charities compare to other near-term welfare opportunities, such as those recommended by GiveWell?
Lastly, I don’t think this research provides a post-hoc justification for women’s empowerment: in my view it could just as well have provided a justification not to donate in that area (if the best charities had turned out to be worse than those in other areas) as a justification to donate in it. At FP we do research into areas not to justify our members’ initial preferences, but to recommend high-impact opportunities tailored to those preferences (if high-impact opportunities are available), as well as to be able to make a solid, justified argument to focus on other areas (if higher-impact opportunities are available in those other areas).
*This does not mean that the choice of writing this report was a non-cause-neutral choice: for FP to do the most good we obviously need to take our community’s preferences into account. Neither does it mean that one couldn’t arrive at women’s empowerment as a high-potential cause area through cause prioritisation.