Hi Habryka,
Thank you (and others) once more for all your comments here. They have been useful, and we have incorporated changes to account for them in a new version of the report, which will be published in March or April. They were also useful in our internal discussion on how to frame our research, and we plan to keep improving our communication around this throughout the rest of the year, e.g. by publishing a blog post / brief on cause prioritisation for our members.
I also largely agree with the views you express in your last post above, insofar as they pertain to the contents of this report specifically. However, I should stress that your comments do not apply to FP research generally: we generally choose the areas we research through cause prioritisation, in a cause-neutral way, and we do try to answer the question ‘how can we achieve the most good’ in the areas we investigate, without shying away from harder-to-measure impact. In fact, we are moving more and more in that direction, and are developing research methodology to do so (see e.g. our recently published methodology brief on policy interventions).
Some of our reports so far have been exceptions to these rules for pragmatic (though impact-motivated) reasons, mainly:
We quickly needed to build a large enough ‘basic’ portfolio of relatively high-impact charities, so that we could make good recommendations to our members.
There are some causes our members ask lots of questions about or are especially interested in, and we want to be able to say something about those areas, even if we ultimately recommend that they focus on other areas where we find better opportunities.
But there are definitely ways in which we can improve the framing of these exceptions, and the comments you provided have already been helpful in that regard.