Not really, I’m afraid. That reasoning seems analogous to the makers of glipizide saying: we know lowering blood sugar in diabetics decreases deaths (we do indeed have data showing that), and their drug lowers blood sugar, so they don’t need to monitor the effect of their drug on deaths. Your model can be faulty, your base statistics can be wrong, and you can have unintended consequences. Glipizide does lower blood sugar, but if you take it as a diabetic, you are more likely to die than if you don’t.
It would also be like the Against Malaria Foundation neglecting to measure malaria rates in the areas where they work. AMF only distributes nets, but they don’t restrict their concern, or their monitoring, to how many people sleep under bed nets. Bed net distribution and use only matter if they translate into decreased morbidity and mortality from malaria.
If you are sharing information because you want to increase the flow of money to effective charities, and you don’t measure that, then I think you are hobbling yourself from ever demonstrating an impact.
Bernadette, I’m confused. I did say we measured the rate of conversion from the people we draw to the websites of charity evaluators like TLYCS. What I am describing is what we take credit for, and what we can control.
I want to be honest in saying that we can’t take full credit for what people do once they hit the TLYCS website. Taking credit for that would be somewhat disingenuous, as TLYCS has its own marketing materials on the website, and we cannot control that.
So what we focus on measuring and taking credit for is what we can control :-)
Your comment above indicated you had measured it at one time but did not plan to do so on an ongoing basis: “However, we can’t control that, and it would not be helpful to assess that on a systematic basis, beyond that base rate.” That approach would not be sensitive to the changing effect size of different methods.
That’s a good point; I am updating toward measuring it more continuously based on your comments. Thanks!