Thanks for sharing this report, and for all the work that went into this program so far.
Regarding social desirability bias, and survey problems generally, a few tweaks might help:
Social desirability bias in surveys can be significantly reduced by using the “list experiment” technique.
There might be a way to phrase the question so that the social desirability bias goes the other way. For example, instead of asking “did you use the products?”, you could ask “do you still have the products?”
If you ask people to keep the packaging after use, then you could ask to see it (and observe if it has been used, or not). This might also help estimate diversion.
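On the list-experiment point, here is a minimal sketch of how the estimator works; the data, group sizes, and the sensitive item are all purely hypothetical. Respondents report only a count of statements that apply to them, and the treatment-minus-control difference in means estimates the prevalence of the sensitive item.

```python
# Minimal sketch of a list experiment (item-count technique); all data
# and the sensitive item below are hypothetical. Respondents are randomly
# split: the control group sees 4 innocuous statements, the treatment
# group sees the same 4 plus a sensitive one (e.g. "I did not use the
# ORS sachets"). Each respondent reports only HOW MANY statements apply,
# never which ones, so nobody admits the sensitive behaviour directly.
import statistics

control_counts = [1, 2, 0, 1, 2, 1, 3, 2]    # 4 innocuous items only
treatment_counts = [2, 2, 0, 1, 3, 1, 3, 2]  # same 4 plus the sensitive item

# The treatment-minus-control difference in mean counts estimates the
# share of respondents for whom the sensitive statement is true.
estimate = statistics.mean(treatment_counts) - statistics.mean(control_counts)
print(f"Estimated prevalence of the sensitive behaviour: {estimate:.0%}")
# -> Estimated prevalence of the sensitive behaviour: 25%
```

One caveat: the innocuous items need care (ceiling and floor effects can leak information), so it is worth piloting the lists first.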
Regarding the overlap with ANRiN, have you estimated the prior probability of that happening, given the size of the programs? It makes me wonder if there is a bias in the selection of treatment locations that makes this more likely, and which might also affect results in other ways. For example, maybe both organizations are selecting treatment locations with better transportation infrastructure, in which case the program might prove harder to scale in the future.
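To make the prior-probability point concrete, here is a back-of-envelope sketch under a random-selection assumption; the ward counts are entirely made up. If the realised overlap is much more likely than such a baseline suggests, that is some evidence of correlated site selection.

```python
# Back-of-envelope prior on overlapping with another program, under a
# random-selection assumption; all ward counts here are made up.
# If ANRiN operates in `theirs` of `total` candidate wards and we pick
# `ours` wards uniformly at random, zero overlap has hypergeometric
# probability C(total - theirs, ours) / C(total, ours).
from math import comb

total, theirs, ours = 200, 30, 4  # hypothetical ward counts
p_none = comb(total - theirs, ours) / comb(total, ours)
print(f"P(at least one ward overlaps) = {1 - p_none:.0%}")  # ~48% with these numbers
```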
This point about altering the questioning is brilliant. Personally (many disagree with me) I think that social desirability and “future hope” bias are so overwhelming when free stuff has been given out that there’s almost no point in asking someone if they used the ORS correctly.
I like “do you still have the products?” as a better question to partly mitigate that, or you could take it further and ask “can you show me the ORS?”
I also find the drop in “2-week prevalence” from pre to post suspicious, as well as the pre prevalence of 38 percent in the slum. That rate seems implausibly high unless there’s a cholera epidemic or similar going on (and even then it’s probably still almost impossible).
I also had a question about what percent of the ORS given out was actually used during the 4-week period: not what percent of diarrhea was treated with ORS, but what percent of everything distributed was reportedly used. This is an important figure to help you check for potential reporting bias as well. I might be missing it, but I couldn’t see it there.
Interested to hear your thoughts on this. Thanks for all the amazing work.
Hi Nick, thanks for sharing your thoughts and excellent points.
Regarding the urban slum rates, thank you for calling this out! On digging back into it, we realise we unfortunately failed to copy the corrected prevalence data into the report when we fixed a code bug in the urban slum Baseline (which initially included diarrhoea instances for an additional week in both timeframes). The other wards used a later survey version with separate logic and are not impacted. The pre-post results data in the report were updated after the fix, so no change is needed there.
That all said, the corrected urban slum Baseline 2-week and 4-week prevalences of 30.1% and 41.8% (will update original post) are still comparatively high.
Regarding the proportion of all distributed ORS that was used: we asked households how many ORS sachets they used, by age group, over the 6-week follow-up period, and we counted how many sachets they had left, so we do have those extra signals with which to scrutinise their claimed ORS treatment rates. A complication here is the unknown volume of ORS used in a given treatment: two one-litre sachets are provided per co-pack, but one litre can be sufficient, depending on the diarrhoea duration and on whether the caregiver abides by the instruction to discard prepared ORS after 24 hours. Nonetheless, this is certainly something for us to look into further.
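As an illustration of the kind of reconciliation those sachet counts allow, a minimal sketch (column names and figures are hypothetical, not our actual survey fields):

```python
# Sketch of a per-household reconciliation check; column names and
# figures are hypothetical. Reported usage should roughly match
# sachets received minus sachets counted as remaining at follow-up.
import pandas as pd

df = pd.DataFrame({
    "sachets_received": [4, 6, 2],
    "sachets_remaining": [1, 6, 0],
    "sachets_reported_used": [3, 1, 2],
})
df["implied_used"] = df["sachets_received"] - df["sachets_remaining"]
df["discrepancy"] = df["sachets_reported_used"] - df["implied_used"]
# Households whose answers don't reconcile warrant a closer look
print(df[df["discrepancy"] != 0])
```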
Thanks again for your thoughtful comments and for helping improve our program!
Nice one, great reply!
Just to double-check (it’s still not completely clear to me): did you distribute to about 1000 households, with about 930 of those using the ORS over a 6-week follow-up period? Or did you distribute to more than that, but about 1000 households said their kid had diarrhoea in that time?
A bit confused whether 93%-ish of all families given ORS used it over 6 weeks, or something different? Obviously it would be impossible for that high a proportion of families to have kids with diarrhoea in only a 6-week period. Still trying to get my head around this.
Nick.
“whether 93%-ish of all families given ORS used it over 6 weeks, or something different?”

Something different :) But I think I see what you’re getting at.
Total distribution was to ~4000 households (families) across the 4 wards. The question on usage of ‘our’ ORS by age group was asked at follow-up, with approx. 2400 households (HHs) surveyed. For the ORS sachets used by HHs in that sample, the table “% Clear Solutions ORS sachets used, % by age group” shows who (by age group) used them.
I think you’re asking what proportion of all the ORS provided was used (1) by anyone, and (2) by under-5s. This is a good point, as it gives an idea of what revisit cadence, or potential increase in packets given per child, would be needed for continuous coverage.
That sample of HHs reported 3317 co-packs received (i.e. 6634 ORS sachets, 2 per co-pack), with 2046 sachets reported used (of which ~80%, varying by ward, were used for under-5s). So 2046/6634 = 30.8% of the ORS sachets distributed were reported used within the 6-week follow-up period. We’ll prep this too for addition to the report. Thank you!
Thanks, that makes sense. I was actually trying to ask what proportion of households reported using ORS, not what percent of sachets were used. I think I get most of it now, nice one. One thing I’m still not clear on is...
Of the 2400 households surveyed, how many reported using any ORS at all after 6 weeks? That’s a crucial number for me, both as a sanity check and as an uptake check. I’m not sure if that’s in this report or not.
Just my 2 cents, but I think it’s helpful to start a report with the really basic design stuff, i.e. “We gave out xxxx ORS sachets to XXXX families while doing XXXXX education, then followed them up after XXXX weeks.” I struggled a little bit to follow the process here. Not a big deal.
That’s good feedback, thanks: we’ve perhaps leapt too directly from the conceptual description to the results, without properly quantifying the basic operation. Noted for improvement.
To (finally, I hope!) answer your question: of the ~2400 (2381) households surveyed, or 2163 once we filter out the non-consents, within the 6-week follow-up period:
1518 (70.1%) reported any occupant using ORS from any source; and
924 (42.7%) reported any occupant using ORS from our distribution.
This is “all wards” data, so it may skew somewhat according to exact response numbers per ward. Please take these numbers as provisional and subject to error, in the name of a timely response.
We have not looked at them in more depth yet, but I see the value in this perspective and we’ll think more about what we might learn from them. I’m also interested in your take on what we might infer from these, Nick (and others).
Hi Ian, thank you for your comment!
Thank you for your suggestions on the social desirability front. Do you have specific resources you could suggest on this? We tried incorporating some of your points, such as packet counting, in the pilot, but we are always looking for other methods like the ones you listed.
We unfortunately did not have an in-depth view of the ANRiN program during our pilot implementation. However, we have since gotten in touch and are aiming to understand some of your questions retrospectively, if possible. Going forward, the overlap will likely not be a major concern, as this is the final year of funding for ANRiN, but we will aim to monitor this status as well.
Regardless, your point on treatment location bias is an important consideration that we did not pay as much attention to at the pilot stage. Instead, we focused on quickly learning about operational feasibility and were not especially meticulous in treatment location selection, beyond having diverse rurality representation. We will certainly pay more attention to this as we plan for the next phase.
Hi Charlie, thanks for your reply.
I am a dilettante and don’t have much further to offer on social desirability bias, unfortunately. You might try connecting with a social scientist, development economist, or staff at one of the EA or EA-adjacent global health and development charities operating at the frontier of evidence for their respective interventions, such as GiveWell, GiveDirectly, Living Goods, IDinsight, DMI, Evidence Action, etc.