We have certainly improved monitoring practices since 2016, and it is important that we continue to look for ways to improve them.
The observations and criticisms made in 2014 were valid. One of the benefits of independent organisations reviewing our work in detail is that we receive feedback and suggestions that help us do a better job.
An example of a recent improvement is a change in the frequency and scale of our post-distribution monitoring (PDMs). For many years, PDMs were carried out every six months and involved visiting 5% of the households that received nets. As a result of an 18-month trial in Uganda, in which we carried out PDMs in 124 health sub-districts split into five randomised groups (Arm 1: 6-monthly, 5% of households; Arm 2: 9-monthly, 5% of households; Arm 3: 6-monthly, 1.5% of households; Arm 4: 9-monthly, 1.5% of households; Arm 5: a single PDM at 18 months, as a control), we generated the data to support a move to 9-monthly PDMs visiting 1.5% of households. This has reduced cost without any loss in the benefit of carrying out the PDMs or in the value of the data generated.
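A randomised split of this kind can be sketched as follows. This is a minimal illustration only, not AMF's actual assignment procedure, and the sub-district identifiers are placeholders:

```python
import random

def assign_arms(subdistricts, n_arms=5, seed=42):
    """Randomly split sub-districts into n_arms groups of near-equal size."""
    ids = list(subdistricts)
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible and auditable
    rng.shuffle(ids)
    # Deal the shuffled list round-robin into the arms (keys 1..n_arms)
    return {arm + 1: ids[arm::n_arms] for arm in range(n_arms)}

# 124 hypothetical health sub-district identifiers
arms = assign_arms([f"HSD-{i:03d}" for i in range(1, 125)])
# With 124 units and 5 arms, four arms get 25 sub-districts and one gets 24
```

Using a fixed, recorded random seed means the split can be re-generated and checked by anyone after the fact, which matters when the trial's conclusions will be used to justify a change in monitoring policy.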
Another example, which took place earlier as a result of feedback from GiveWell, was for AMF itself to make the randomised selection of households to visit rather than leaving this to in-country partners. It is not clear whether this changed the outcomes or reliability of the PDMs, but separating who does the selecting from who does the visiting increased confidence in the results.
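The separation of roles can be illustrated with a short sketch. The function name, the 1.5% sampling fraction, and the household register are assumptions for illustration; this is not AMF's actual selection code:

```python
import random

def select_households(registered_ids, fraction=0.015, seed=2024):
    """Centrally draw a random sample of households for a PDM visit.

    Recording the seed lets the draw be re-run and verified independently
    of the in-country partner who carries out the visits.
    """
    rng = random.Random(seed)
    k = max(1, round(len(registered_ids) * fraction))
    return sorted(rng.sample(list(registered_ids), k))

# e.g. 10,000 hypothetical household IDs from a distribution register
sample = select_households(range(10_000))  # 150 households, i.e. 1.5%
```

Because the central office holds the seed and the register, the visiting teams cannot influence which households appear in the sample, which is the point of the separation described above.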
The increased use of electronic data collection devices is another way in which monitoring is improving, with benefits including lower cost, improved accuracy, earlier detection of problems, and faster access to results.
Improving monitoring practices is a priority and we continually reflect on how we can do better.