ME RDQA Guidelines en
Acknowledgments
This tool was developed with input from a number of people from various organizations. Those most
directly involved in development of the tool included Ronald Tran Ba Huy of The Global Fund to Fight
AIDS, Tuberculosis and Malaria, David Boone, Karen Hardee, and Silvia Alayon from MEASURE
Evaluation, Cyril Pervilhac and Yves Souteyrand from the World Health Organization, and Annie La Tour
from the Office of the Global AIDS Coordinator.
This tool benefited from pilot tests of the Data Quality Audit (DQA) Tool in Tanzania, Rwanda, Vietnam,
and Madagascar, as well as from feedback from a number of participants in workshops and meetings
held in Nigeria, Ghana, Senegal, South Africa, Vietnam, Switzerland and the U.S.
David Boone
MEASURE Evaluation/JSI
DBoone@jsi.com
Ronald Tran Ba Huy
The Global Fund
Ronald.Tranbahuy@Theglobalfund.org
Cyril Pervilhac
WHO
pervilhacc@who.int
Contents

1 - BACKGROUND
2 - CONCEPTUAL FRAMEWORK
3 - OBJECTIVES
4 - USES
5 - METHODOLOGY
6 - OUTPUTS
7 - IMPLEMENTATION STEPS FOR THE RDQA
8 - ETHICAL CONSIDERATIONS
9 - Annex 1: The Link Between the Reporting System and Data Quality
10 - Annex 2: Instructions for Sampling Sites Using 2-Stage Cluster Sampling
1 - BACKGROUND
National Programs and donor-funded projects are working towards achieving ambitious goals
related to the fight against diseases such as Acquired Immunodeficiency Syndrome (AIDS),
Tuberculosis (TB) and Malaria. Measuring the success and improving the management of these
initiatives is predicated on strong Monitoring and Evaluation (M&E) systems that produce quality
data related to program implementation.
In the spirit of the Three Ones, the Stop TB Strategy and the RBM Global Strategic Plan, a
number of multilateral and bilateral organizations have collaborated to jointly develop a Data Quality
Assessment (DQA) Tool. The objective of this harmonized initiative is to provide a common
approach for assessing and improving overall data quality. A single tool helps to ensure that
standards are harmonized and allows for joint implementation between partners and with National
Programs.
The DQA Tool focuses exclusively on (1) verifying the quality of reported data, and (2) assessing
the underlying data management and reporting systems for standard program-level output
indicators. The DQA Tool is not intended to assess the entire M&E system of a country's response
to HIV/AIDS, Tuberculosis or Malaria. In the context of HIV/AIDS, the DQA relates to component 10
(i.e. Supportive supervision and data auditing) of the Organizing Framework for a Functional
National HIV M&E System.
Two versions of the Data Quality Assessment Tool have been developed: (1) The Data Quality
Audit (DQA) Tool which provides guidelines to be used by an external audit team to assess a
Program/project's ability to report quality data; and (2) The Routine Data Quality Assessment
(RDQA) Tool which is a simplified version of the DQA and allows Programs and projects to rapidly
assess the quality of their data and strengthen their data management and reporting systems.
2 - CONCEPTUAL FRAMEWORK
The conceptual framework for the DQA and RDQA is illustrated in Figure 1 (below). Generally,
the quality of reported data is dependent on the underlying data management and reporting
systems; stronger systems should produce better quality data. In other words, for good quality data
to be produced by and flow through a data-management system, key functional components need
to be in place at all levels of the system - the points of service delivery, the intermediate level(s)
where the data are aggregated (e.g. districts, regions) and the M&E unit at the highest level to
which data are reported. The DQA and RDQA tools are therefore designed to (1) verify the quality
of the data, (2) assess the system that produces that data, and (3) develop action plans to improve
both.
Figure 1. Conceptual Framework for the RDQA: Data Management and Reporting
Systems, Functional Areas and Data Quality
[Figure 1 depicts the data management and reporting system as a pyramid: Service Points at the base, Intermediate Aggregation Levels (e.g., Districts, Regions) in the middle, and the M&E Unit at the top, with the functional areas of the system (e.g., training, indicator definitions) supporting the production of quality data at all reporting levels.]

3 - OBJECTIVES
- VERIFY rapidly (1) the quality of reported data for key indicators at selected sites; and (2) the
  ability of data-management systems to collect, manage and report quality data.
- DEVELOP an action plan to implement corrective measures for strengthening the data
  management and reporting system and improving data quality.
- MONITOR over time capacity improvements and performance of the data management and
  reporting system to produce quality data (notably through repeat use of the RDQA).
4 - USES
The RDQA is designed to be flexible in use and can serve multiple purposes. Some potential uses
of the tool are listed below, though it is most effective when used routinely.
Routine data quality checks as part of on-going supervision: For example, routine data quality
checks can be included in already planned supervision visits at the service delivery sites.
Initial and follow-up assessments of data management and reporting systems: For example,
repeated assessments (e.g., biannually or annually) of a system's ability to collect and report
quality data at all levels can be used to identify gaps and monitor necessary improvements.
Strengthening program staff's capacity in data management and reporting: For example, M&E
staff can be trained on the RDQA and be sensitized to the need to strengthen the key functional
areas linked to data management and reporting in order to produce quality data.
Preparation for a formal data quality audit: The RDQA tool can help identify data quality issues
and areas of weakness in the data management and reporting system that would need to be
strengthened to increase readiness for a formal data quality audit.
External assessment of the quality of data: Such use of the RDQA for external assessments by
donors or other stakeholders could be more frequent, more streamlined and less resource
intensive than comprehensive data quality audits that use the DQA version for auditing.
The potential users of the RDQA include program managers, supervisors and M&E staff at National
and sub-national levels, as well as donors and other stakeholders.
5 - METHODOLOGY
The RDQA Tool is comprised of two components: (1) verification of reported data for key indicators
at selected sites; and (2) assessment of data management and reporting systems.
Accordingly, the questionnaires in the RDQA contain two parts for data collection:
Part 1 - Data Verifications: quantitative comparison of recounted to reported data and review of
timeliness, completeness and availability of reports;
Part 2 - Systems Assessment: qualitative assessment of the relative strengths and weaknesses
of functional areas of the data management and reporting system.
The RDQA Tool contains three groups of data collection sheets; these include sheets to be
completed (1) at service delivery points, (2) at intermediate aggregation sites (e.g. district or
regional offices), and (3) at the M&E Unit.
Part 1 - Verification of Reported Data for Key Indicators:
The purpose of Part 1 of the RDQA is to assess, on a limited scale, if service delivery and
intermediate aggregation sites are collecting and reporting data to measure the audited indicator(s)
accurately and on time and to cross-check the reported results with other data sources. To do
this, the RDQA will determine if a sample of Service Delivery Sites have accurately recorded the
activity related to the selected indicator(s) on source documents. It will then trace that data to see if
it has been correctly aggregated and/or otherwise manipulated as it is submitted from the initial
Service Delivery Sites through intermediary levels to the program/project M&E Unit.
Data Verifications at Service Delivery Points:
At the Service Delivery Points, the data verification part of the RDQA Excel protocol (Part 1) has
three sub-components (as shown in Figure 2):
1. Reviewing Source Documentation: Review availability and completeness of all indicator source
documents for the selected reporting period.
2. Verifying Reported Results: Recount the reported numbers from available source documents,
compare the verified counts to the site reported number; and identify reasons for differences.
3. Cross-checking Reported Results with other Data Sources: Perform cross-checks of the verified
report totals with other data-sources (e.g. inventory records, laboratory reports, registers, etc.).
[Figure 2 excerpts the service delivery point checklist. Each item is answered "Yes - completely", "Partly" or "No - not at all", with space for reviewer comments. Sample items include: reviewing whether the dates on the source documents all fall within the reporting period (and, if not, determining how this might have affected reported numbers); copying the number of people, cases or events reported by the site during the reporting period from the site summary report; and identifying the reasons for any discrepancy observed (e.g., data entry errors, arithmetic errors, missing source documents).]
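The verification in sub-component 2 reduces to the ratio of the recounted result to the site-reported result (the RDQA dashboards report this as the verification factor). A minimal sketch in Python; the function names and the tolerance are illustrative assumptions, not values prescribed by the RDQA:

```python
def verification_factor(recounted: int, reported: int) -> float:
    """Ratio of the recounted (verified) result to the site-reported result.

    1.0 means an exact match; < 1.0 suggests over-reporting by the site;
    > 1.0 suggests under-reporting.
    """
    if reported == 0:
        raise ValueError("site reported zero; compare the raw counts instead")
    return recounted / reported


def flag_discrepancy(recounted: int, reported: int,
                     tolerance: float = 0.05) -> bool:
    """Flag sites whose verification factor deviates from 1.0 by more than
    the tolerance (the 5% threshold is an assumption for this sketch)."""
    return abs(verification_factor(recounted, reported) - 1.0) > tolerance


# Example: a site reported 120 clients, but only 108 could be recounted
# from the source documents.
vf = verification_factor(108, 120)  # 0.9 -> over-reporting
```

In practice the reasons for any discrepancy (data entry errors, arithmetic errors, missing source documents) matter as much as the factor itself, so the ratio is a trigger for follow-up, not a verdict.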
Figure 3 (Annex 2) lists the questions asked for the systems assessment, the levels of the reporting
system to which the questions pertain, and the components of data quality addressed by each
question. (See Annex 1 for more detail on the link between a data management and reporting
system and the components of data quality).
It is recommended to apply the system assessment questionnaire in a participatory manner with all
relevant M&E staff present and discussing the answers thoroughly. As questions are answered,
detailed notes should be taken to ensure a comprehensive understanding of the responses. This is
also necessary so that follow-up visits can ensure the correct improvements have been made.
Parts 1 (data verifications) and 2 (systems assessment) of the RDQA Tool can be implemented at
any or all levels of the data management and reporting system: the M&E Unit, Intermediate
Aggregation Sites and/or Service Delivery Points. It is recommended that both parts of the RDQA
be used to fully assess data quality, especially the first time the tool is implemented; thereafter,
depending on the assessment objectives, one or both protocols can be applied and adapted to
local contexts. It is, however, recommended that the data verification part of the tool be conducted
more frequently in order to monitor and guarantee the quality of reported data. The system
assessment protocol could be applied less often (e.g., biannually or annually).
6 - OUTPUTS
The site-specific dashboards display two graphs for each site visited:
- The spider-graph on the left displays qualitative data generated from the assessment of the data
  management and reporting system and can be used to prioritize areas for strengthening.
- The bar-chart on the right shows the quantitative data generated from the data verifications;
  these can be used to plan and set targets for data quality improvement.
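The reporting metrics behind the bar-chart are simple proportions over the reports expected for the period. A sketch with invented field names (the RDQA workbook computes these in Excel; none of these names come from the tool):

```python
# Illustrative per-site report records for one reporting period.
# Field names are assumptions for this sketch, not RDQA workbook fields.
reports = [
    {"available": True,  "on_time": True,  "complete": True},
    {"available": True,  "on_time": False, "complete": True},
    {"available": False, "on_time": False, "complete": False},
    {"available": True,  "on_time": True,  "complete": False},
]

def percent(reports, key):
    """Share of expected reports meeting a criterion, as a percentage."""
    return 100.0 * sum(r[key] for r in reports) / len(reports)

pct_available = percent(reports, "available")  # 75.0
pct_on_time = percent(reports, "on_time")      # 50.0
pct_complete = percent(reports, "complete")    # 50.0
```

Together with the verification factor from the recounts, these percentages are what the dashboard plots for target-setting.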
Decisions on where to invest resources for system strengthening should be based on the relative
strengths and weaknesses of the different functional areas of the reporting system identified via the
RDQA, as well as consideration of practicality and feasibility.
A global summary dashboard is also produced to show the aggregate results from the (1) data
verification, and (2) data management system assessment. In addition, a dashboard is produced
to show findings from the systems assessment by the components of data quality. The Global
dashboards are shown in Figure 5.
Figure 5. Global Dashboards [when using the MS Excel file]

[The global dashboards comprise a spider-graph of systems assessment results by functional area (e.g., training, indicator definitions); a tally of responses ("Yes - completely", "Partly", "No - not at all", "N/A") for each dimension of data quality (reliability, timeliness, completeness, precision, integrity, confidentiality) on a 0-100% scale; and a bar-chart of the data verification results: % available, % on time, % complete and the verification factor.]
The final output of the RDQA is an action plan for improving data quality which describes the
identified strengthening measures, the staff responsible, the timeline for completion, resources
required and follow-up. The template for the action plan is shown in Figure 7.
[The Figure 7 template includes columns for the person responsible, the time line and the resources required.]
7 - IMPLEMENTATION STEPS FOR THE RDQA
based on a number of criteria, including an analysis of the funding levels to various program
areas (e.g., ARV, PMTCT, ITN, DOTS, Behavior Change Communication [BCC]) and the results
reported for the related indicator(s). In addition, the deciding factor could also be program areas
of concern (e.g., community-based programs that may be more difficult to monitor than facility-based programs). It is also important to clearly identify the reporting period associated with the
indicator(s) to be verified. Using a specified reporting period gives a reference from which to
compare the recounted data. Ideally, the time period should correspond to the most recent
reporting period. If the circumstances warrant, the time period for the assessment could be less
(e.g., a fraction of the reporting period, such as the last quarter or month of the reporting period).
For example, the number of source documents in a busy VCT site could be voluminous,
assessment staff resources may be limited and the Service Delivery Points might produce
monthly or quarterly reports. In other cases, the time period could correspond to an earlier
reporting period where large results were reported by the program/project(s).
3. Describe the data-collection and reporting system related to the indicator(s). Before selecting
the sites and planning for the field visits, it is crucial for the assessment team to understand (1)
what is the source document for the selected indicator(s); (2) how service delivery is recorded
on that source document; and (3) how the data is reported from the Service Delivery Points up
to the central M&E Unit. It is particularly important to identify the correct source document as this
is the documentation that will be reviewed and recounted to calculate data reporting accuracy.
For the purpose of the recounting exercise, there can only be one source document; this is
generally the first document where the service delivery is recorded (e.g. patient treatment card,
client intake form, etc.). It is also important for the assessment team to determine whether data
from Service Delivery Points is aggregated at intermediary levels (e.g. district or regional offices)
before being reported to the M&E Unit. In some cases, the data flow will include more than one
intermediate level (e.g., regions, provinces or states or multiple levels of program organizations).
In other cases, Service Delivery Points may report directly to the central M&E Unit, without
passing through Intermediate Aggregation Levels. It is recommended that the assessment team
visit all levels at which data is aggregated.
Illustration of Program Areas, Indicators, Data Sources and Source Documents
For each program area, a number of indicators are measured through various data sources. For
example, for tuberculosis in the program area Treatment, the international community has agreed
to the harmonized indicator: Number of new smear positive TB cases that successfully complete
treatment. The data source for this indicator is generally facility-based and the source document
is the patient treatment card. As another example related to AIDS, under the U.S. President's
Emergency Plan for AIDS Relief (PEPFAR), a program area is Orphans and Vulnerable Children and an
indicator is: Number of OVC served by OVC programs (disaggregated by male, female, primary
direct and supplemental direct). The data source for this indicator will be at community-based
organizations that serve OVC and the source document will be community-based registers.
Discussions with M&E staff will help identify the source document to use for the data verifications.
4. Select sites to be included. It is not necessary to visit all the reporting sites in a given
Program/project to determine the quality of the data. Random sampling techniques can be
utilized to select a representative group of sites whose data quality is indicative of data quality
for the whole program. Depending on the volume of service (e.g. number of people treated with
ART) and the number of service delivery points, as few as a dozen sites can be assessed to
obtain a reasonable estimate of data quality for the Program/project. Please see Annex 2 for
instructions on how to sample sites using 2-stage cluster sampling. Precise measures of data
accuracy are difficult to obtain for an entire Program/project using these methods. Reasonable
estimates of data accuracy are generally sufficient for the purposes of strengthening data
quality, capacity building or preparing for external auditing. For a more rigorous sampling
methodology leading to more precise estimates of data quality please see the Data Quality Audit
(DQA) Tool and Guidelines on the MEASURE Evaluation website1.
5. Conduct site visits. Sites should be notified prior to the visit for the data quality assessment.
This notification is important in order for appropriate staff to be available to answer the questions
in the checklist and to facilitate the data verification by providing access to relevant source
documents. It is, however, also important not to inform sites too early in advance and to limit the
information provided on the indicator(s) verified and the reporting period assessed. This is to
avoid sites being tempted to correct, manipulate or falsify the data before the arrival of the
assessment team. During the site visits, the relevant sections of the appropriate checklists in the
Excel file are completed (e.g., the service site checklist at service sites, etc). These checklists
are completed through interviews of relevant staff and reviews of site documentation. It is
recommended that the assessment team starts the verifications at the M&E Unit before
proceeding to the relevant Intermediate Aggregation Levels (e.g., District or Provincial offices)
and then to the Service Delivery Points. Doing so provides the assessment team with the
numbers received, aggregated and reported by the M&E Unit and thus a benchmark for the
numbers the team would expect to recount at lower levels. The time required to perform the data
verifications and the systems assessment is typically as follows:
- The M&E Unit will typically require one day (including presentation of the objective and
  approach of the RDQA);
- Each Intermediate Aggregation Level (e.g., District or Provincial offices) will require between
  one-half and one day;
- Each Service Delivery Point will require between one-half and two days (i.e., more than one-half
  day may be required for large sites with reported numbers in the several hundreds or
  sites that include satellite centers).
Assessment teams should however ensure flexibility in their travel schedule to accommodate
unanticipated issues and delays.
6. Review outputs and findings. The outputs from the RDQA should be reviewed for each site
visited and as a whole for the Program/project. Site-specific summary findings and
recommendations should be noted after each site visited and then consolidated for the entire
Program/project towards the end of the RDQA. The findings should stress the positive aspects
of the Program/project M&E system as it relates to data management and reporting as well as
any weaknesses identified by the assessment team. It is important to emphasize that a finding
does not necessarily mean that the Program/project is deficient in its data collection and
reporting. The Program/project may have in place a number of innovative controls and effective
steps to ensure that data are collected consistently and reliably. Nevertheless, the purpose of
the RDQA is to improve data quality. Thus, as the assessment team completes its data
management system and data verification reviews, it should clearly identify evidence and
findings that indicate the need for improvements to strengthen the design and implementation of
the M&E system. It is also important for all findings to be backed by documentary evidence.
7. Develop a system strengthening plan, including follow-up actions. Based on the findings and
recommendations for each site and for the Program/project as a whole, an overall action plan
should be developed (see template above) and discussed with the Program/project manager(s)
and relevant M&E staff.
8 - ETHICAL CONSIDERATIONS
The data quality assessments must be conducted with the utmost adherence to the ethical
standards of the country. While the assessment teams may require access to personal information
(e.g., medical records) for the purposes of recounting and cross-checking reported results, under no
circumstances should any personal information be disclosed in relation to the conduct of the
assessment or the reporting of findings and recommendations. The assessment team should
neither photocopy nor remove documents from sites.
9 - Annex 1: The Link Between the Reporting System and Data Quality
The conceptual framework of the DQA and RDQA is based on three (3) dimensions:
1. reporting levels (i.e., Service Delivery Points, Districts, Regions, etc);
2. dimensions of data quality (i.e., Accuracy, Reliability, Timeliness, etc);
3. functional components of data management systems (i.e., Data Management Processes, etc).
1- Reporting Levels
Data used to measure indicator results flow through a data-collection and reporting system that
begins with the recording of the delivery of a service to a patient/client. Data are collected on source
documents (e.g. patient records, client intake forms, registers, commodity distribution logs, etc.).
Through the Program/project reporting system, the data from source documents are aggregated
and sent to intermediate levels (e.g. a district) for further aggregation before being sent to the
highest level of a Program/project (e.g., the M&E Unit of a National Program, the Principal Recipient
for the Global Fund, or the SI Unit of a USG program). The data from countries are frequently sent
international institutions for global aggregation to demonstrate progress in meeting the goals and
reaching the objectives related to health initiatives.
Figure 1 illustrates this data flow through all levels of a data-collection and reporting system that
includes service sites, districts and a national M&E Unit. Each country and Program/project may
have a different data flow. Data quality problems may occur at each of these levels.
[The figure illustrates the data flow for an example ARV indicator: source documents at five service delivery points record 45, 20, 75, 50 and 200 clients; SDP monthly reports are aggregated into district monthly reports (65, 75 and 250), which feed the national monthly report (Region 1: 65; Region 2: 75; Region 3: 250; total: 390).]
Operational Definitions of the Dimensions of Data Quality

Accuracy: Also known as validity. Accurate data are considered correct: the data measure what they are intended to measure. Accurate data minimize error (e.g., recording or interviewer bias, transcription error, sampling error) to a point of being negligible.

Reliability: The data generated by a program's information system are based on protocols and procedures that do not change according to who is using them and when or how often they are used. The data are reliable because they are measured and collected consistently.

Precision: This means that the data have sufficient detail. For example, an indicator requires the number of individuals who received HIV counseling & testing and received their test results, by sex of the individual. An information system lacks precision if it is not designed to record the sex of the individual who received counseling and testing.

Completeness: Completeness means that an information system from which the results are derived is appropriately inclusive: it represents the complete list of eligible persons or units and not just a fraction of the list.

Timeliness: Data are timely when they are up-to-date (current) and when the information is available on time. Timeliness is affected by (1) the rate at which the program's information system is updated, (2) the rate of change of actual program activities, and (3) when the information is actually used or required.

Integrity: Data have integrity when the systems used to generate them are protected from deliberate bias or manipulation for political or personal reasons.

Confidentiality: Confidentiality means that clients are assured that their data will be maintained according to national and/or international standards for data. This means that personal data are not disclosed inappropriately, and that data in hard copy and electronic form are treated with appropriate levels of security (e.g., kept in locked cabinets and in password-protected files).
Annex 1 - Table 2. Data Management Functional Areas and Key Questions to Address Data Quality

[The table maps 13 key questions to the functional areas of the data management and reporting system and to the dimensions of data quality each question addresses:
I. M&E Capabilities, Roles and Responsibilities (Accuracy, Reliability)
II. Training (Accuracy, Reliability), e.g., "Have the majority of key M&E and data-management staff received the required training?"
III. Indicator Definitions (Accuracy, Reliability)
IV. Data Reporting Requirements (Accuracy, Reliability, Timeliness, Completeness)
V. Data Collection and Reporting Forms and Tools (Accuracy, Reliability, Precision, Confidentiality)
VI. Data Management Processes and Data Quality Controls (ability to assess Accuracy, Precision, Reliability, Timeliness, Integrity and Confidentiality)
VII. Links with National Reporting System (question 13; to avoid parallel systems and undue multiple reporting burden on staff, in order to increase data quality)]
Answers to these 13 questions can help highlight threats to data quality and the related aspects of
the data management and reporting system that require attention. For example, if data accuracy is
an issue, the RDQA can help assess if reporting entities are using the same indicator definitions, if
they are collecting the same data elements, on the same forms, using the same instructions. The
RDQA can help assess if roles and responsibilities are clear (e.g. all staff know what data they are
collecting and reporting, when, to whom and how) and if staff have received relevant training.
Table 3 lists all the questions posed in the RDQA System Assessment component and for each
question, the level at which the question is asked as well as the dimensions of data quality
addressed. This table is helpful for interpreting the "Dimensions of Data Quality" graphic on the
Global Dashboard of the RDQA.
Annex 1 - Table 3. System Assessment Questions and Links to Dimensions of Data Quality

[The table lists each question in the RDQA systems assessment together with the level(s) at which it is asked (M&E Unit, intermediate aggregation levels, service points) and the dimension(s) of data quality it addresses (accuracy, reliability, timeliness, completeness, precision, integrity, confidentiality). Sample questions include:
- The responsibility for recording the delivery of services on source documents is clearly assigned to the relevant staff.
- There is a training plan which includes staff involved in data collection and reporting at all levels in the reporting process (II - Training).
- The M&E Unit has clearly documented data aggregation, analysis and/or manipulation steps performed at each level of the reporting system.
- [If applicable] There are quality controls in place for when data from paper-based forms are entered into a computer (e.g., double entry, post-data-entry verification, etc.).
- If data discrepancies have been uncovered in reports from sub-reporting levels, the M&E Unit (e.g., districts or regions) has documented how these inconsistencies have been resolved.
- The M&E Unit can demonstrate that regular supervisory site visits have taken place and that data quality has been reviewed.]
10 - Annex 2: Instructions for Sampling Sites Using 2-Stage Cluster Sampling
1. Determine the number of clusters and sites. The Assessment Team should work with the
relevant stakeholders (NACA, MoH, SI Team, CCM etc.) to determine the number of clusters
and sites within clusters. The appropriate number of sites and clusters depends on the
objectives of the assessment; precise estimates of data quality require a large number of
clusters and sites. Often it isn't necessary to have a statistically robust estimate of accuracy.
That is, it is sufficient to have a reasonable estimate of the accuracy of reporting to direct
system strengthening measures and build capacity. A reasonable estimate requires far
fewer sites and is more practical in terms of resources. Generally, 12 sites sampled from
within 4 clusters (3 sites each) is sufficient to gain an understanding of the quality of the data
and the corrective measures required.
2. More than one intermediate level. In the event there is more than one Intermediate
Aggregation Level (i.e. the data flows from district to region before going to national level) a
three-stage cluster sample should be drawn. That is, two regions should be sampled and
then two districts sampled from each region (4 total districts).
3. No intermediate level. If the data is reported directly from service delivery point to the
national level (i.e. no Intermediate Aggregation Sites) the site selection will be conducted as
above (cluster sampling with the district as the primary sampling unit) but the data will not be
reviewed for the intermediate level and results from service delivery sites will be aggregated
to derive the national total.
4. Prepare the sampling frame. The first step in the selection of clusters for the assessment
will be to prepare a sampling frame, or a listing of all districts (or clusters) where the activity
is being conducted (e.g. districts with ART treatment sites). The methodology calls for
selecting clusters proportionate to size, i.e. the volume of service. Often it is helpful to
expand the sampling frame so that each cluster is listed proportionate to the size of the
program in the cluster. For example, if a given cluster is responsible for 15% of the clients
served, that cluster should comprise 15% of the elements in the sampling frame. See the
Illustrative Example Sampling Strategy D (Annex 4, Table 3) from the Data Quality Audit
Guidelines1 for more details. Be careful not to order the sampling frame in a way that will
bias the selection of the clusters. Ordering the clusters can introduce periodicity; e.g. every
3rd district is rural. Ordering alphabetically is generally a harmless way of ordering the
clusters.
5. Calculate the sampling interval. The sampling interval is obtained by dividing the number of
elements in the sampling frame by the number of elements to be sampled. Using a random
number table or similar method, randomly choose a starting point on the sampling frame.
This is the first sampled district. Then proceed through the sampling frame selecting districts
which coincide with multiples of the sample interval. The starting number + sampling interval
= 2nd cluster. The starting number + 2(sampling interval) = 3rd cluster etc.
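Steps 4 and 5 can be sketched as follows. The district names and figures are invented, and the helper functions are illustrative, not part of the RDQA tool:

```python
import random

# Illustrative sampling frame: clusters (districts) and their volume of
# service for the indicator being verified (all figures are invented).
frame = {"District A": 300, "District B": 150, "District C": 450,
         "District D": 600, "District E": 500}

def expand_frame(frame):
    """List each cluster once per client served, so a cluster with 15% of
    the clients makes up 15% of the frame (selection proportionate to size).
    Alphabetical ordering avoids introducing periodicity."""
    expanded = []
    for cluster, volume in sorted(frame.items()):
        expanded.extend([cluster] * volume)
    return expanded

def sample_clusters(frame, n_clusters, rng):
    """Systematic sampling: a random start within the first interval, then
    every interval-th element of the expanded frame."""
    expanded = expand_frame(frame)
    interval = len(expanded) // n_clusters
    start = rng.randrange(interval)
    chosen = []
    for k in range(n_clusters):
        cluster = expanded[start + k * interval]
        if cluster not in chosen:  # a very large cluster can be drawn twice
            chosen.append(cluster)
    return chosen

clusters = sample_clusters(frame, n_clusters=4, rng=random.Random(1))
```

With this frame the interval is 2,000 / 4 = 500; because District D occupies 600 of the 2,000 expanded entries (more than one full interval), it is certain to be drawn at least once, which is exactly the proportionate-to-size property the guidelines describe.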
6. Stratify Service Delivery Points. Order the service delivery points within each of the sampled
districts by volume of service, i.e. the value of the indicator for the reporting period being
assessed. Divide the list into strata according to the number of sites to be selected. If
possible, select an equal number of sites from each strata. For example, if you are selecting
three sites, create three strata (small, medium and large). If selecting two sites, create two
strata. For six sites create three strata and select two sites per stratum, and so on. Divide
the range (subtract the smallest value from the largest) by the number of strata to establish
the cut points of the strata. If the sites are not equally distributed among the strata, use your
judgment to assign sites to strata.
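The stratification arithmetic in step 6 can be sketched as follows, with invented site volumes (the helper names are illustrative, not part of the tool):

```python
import random

# Illustrative service delivery points in one sampled district, with the
# reported value of the indicator for the period (figures are invented).
sites = {"SDP 1": 45, "SDP 2": 20, "SDP 3": 75, "SDP 4": 50, "SDP 5": 200,
         "SDP 6": 120, "SDP 7": 30, "SDP 8": 160, "SDP 9": 90}

def stratify(sites, n_strata):
    """Split sites into equal-width strata by volume of service:
    cut points = smallest value + k * (range / n_strata)."""
    lo, hi = min(sites.values()), max(sites.values())
    width = (hi - lo) / n_strata or 1  # guard against all-equal volumes
    strata = [[] for _ in range(n_strata)]
    for name, volume in sorted(sites.items(), key=lambda s: s[1]):
        k = min(int((volume - lo) / width), n_strata - 1)  # top value lands in last stratum
        strata[k].append(name)
    return strata

def select_sites(sites, n_strata, per_stratum, rng):
    """Simple random sampling of per_stratum sites within each stratum."""
    chosen = []
    for stratum in stratify(sites, n_strata):
        chosen.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return chosen

selected = select_sites(sites, n_strata=3, per_stratum=1, rng=random.Random(0))
```

With these figures the cut points fall at 80 and 140, giving strata of five, two and two sites; the guidance about unevenly distributed sites applies to the larger first stratum.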
7. Select Service Delivery Points. For a large number of sites per district you can use a
random number table and select sites systematically as above. For a small number of sites,
simple random sampling can be used to select sites within clusters.
8. Select back-up sites. If possible, select a back-up site for each stratum. Use this site only if
you are unable to visit the originally selected sites due to security concerns or other factors.
Start over with a fresh sampling frame to select this site (excluding the sites already
selected). Do not replace sites based on convenience. The replacement of sites should be
discussed with the funding organization and other relevant stakeholders if possible.
9. Know your sampling methodology. The sites are intended to be selected for the assessment
as randomly (and equitably) as possible while benefiting from the convenience and economy
associated with cluster sampling. You may be asked to explain why a given site has been
selected. Be prepared to describe the sampling methods and explain the equitable selection
of sites.