Thank you for your great feedback and suggestions! (and sorry for not responding sooner)
I guess that whether one considers a limitation “major” or “moderate” is, in the end, contingent on one’s aspirations. If we held ourselves to the standards of an organization like GiveWell, this would most certainly be a very big limitation. But quite early on we understood that we did not have the data to support conclusions about cost-effectiveness as strong as those behind GiveWell’s recommendations. Rather, our approach was: let’s do the best we can with the data at hand, and simply make sure that we are very clear and transparent about the limitations of our analysis. The biggest limitation of this analysis is the lack of experimental data (only observational data were available), so we wanted to make sure that limitation got the most eye-catching label. In the end, we believe what matters is that readers of the report (or just of the executive summary) get a good sense of which conclusions are justified given our analysis and which aren’t, and that they understand the important limitations of the analysis. We fully agree with your arguments, and in particular that past cost-effectiveness is by no means proof of future cost-effectiveness given more funding (though we do think there are reasons for cautious optimism in the case of Animals Now).
Also, thank you for the interesting suggestion for an RCT study design. This is something we have been considering in general, but we hadn’t thought of your exact idea. However, to attempt anything like that, we would first need the charity to be strongly motivated to take on that adventure.