Rob, thank you so much for the work you and AMF are doing!
GiveWell has written here saying they think your monitoring practice could be improved, though they “continue to believe that AMF stands out among bed net organizations, and among charities generally, for its transparency and the quality of its program monitoring.”
I’d first like to applaud the fact that your transparency and monitoring practices are much better than those of the typical development NGO. It seems that one reason GiveWell selected AMF, rather than other bed net charities, as a top charity is precisely this (though I could be wrong).
However, given their comment, do you feel it is important for AMF to improve its monitoring practices? Or is that not a priority now? Also the post is from 2016 and may be outdated.
(I can understand how it’s difficult to invest more in monitoring given you have so few staff, and work with international partners on the ground and have less control over the process.)
I work at IDinsight, and am always curious how NGOs decide to spend more or less effort on monitoring. On the one hand it’s really important for improving operations and understanding your own impact, but on the other hand it does compete for resources with your core implementation work.
We have certainly improved our monitoring practices since 2016, and it’s important that we continue to look for ways to improve them.
The observations and criticisms made in 2014 were valid, and one of the benefits of having independent organisations review our work in detail is that we receive feedback and suggestions that help us do a better job.
An example of a recent improvement is the change in the frequency and scale of our post-distribution monitoring (PDM). For many years, PDMs were carried out every 6 months and involved visiting 5% of the households that received nets. As a result of an 18-month trial in Uganda, in which we carried out PDMs in 124 health sub-districts split into five randomised groups (Arm 1: 6-monthly, 5% of households; Arm 2: 9-monthly, 5% of households; Arm 3: 6-monthly, 1.5% of households; Arm 4: 9-monthly, 1.5% of households; Arm 5: a single PDM at 18 months as a control), we generated the data to support a move to 9-monthly PDMs visiting 1.5% of households. This has reduced cost without any loss in the benefit of carrying out the PDMs or the value of the data generated.
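To see roughly why the new schedule is so much cheaper, consider the household-visit volume over any 18-month window under each schedule. This is illustrative arithmetic only, not AMF's actual cost model (real PDM costs also depend on travel, staffing, and fixed per-round overheads):

```python
def visit_volume(interval_months: int, sample_fraction: float,
                 window_months: int = 18) -> float:
    """Fraction of recipient households visited over the window,
    counting repeat rounds (rounds x sample fraction per round)."""
    rounds = window_months // interval_months
    return rounds * sample_fraction

# Old schedule: 6-monthly PDMs at 5% of households -> 3 rounds x 5%
old = visit_volume(6, 0.05)
# New schedule: 9-monthly PDMs at 1.5% of households -> 2 rounds x 1.5%
new = visit_volume(9, 0.015)

print(f"old: {old:.0%}, new: {new:.0%}, reduction: {1 - new / old:.0%}")
# -> old: 15%, new: 3%, reduction: 80%
```

On this simplified measure the new schedule requires about a fifth of the household visits per 18 months, which is consistent with the trial's finding that the lighter schedule lost little of the data's value.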
Another example that took place earlier, as a result of feedback from GiveWell, was for AMF itself to make the randomised selections of households to visit rather than leaving this to in-country partners. It is not clear if this changed the outcome and reliability of the PDMs, but the separation of who does the selecting and who does the visiting increased confidence in the results of the PDMs.
The increased use of electronic data collection devices is another way in which monitoring is being improved, with benefits including lower cost, improved accuracy, earlier detection of problems and faster access to results.
Improving monitoring practices is a priority and we continually reflect on how we can do better.