Assignment M&E 1, Semester 1, 2024
national development programmes of all kinds, while the ministry oversees only flagship
government programmes and projects, using real-time monitoring and rapid evaluations.
Planning, Monitoring and Evaluation Divisions are responsible for ministerial, departmental and
agency monitoring and evaluation (M&E) activities.
Activity summary
In this section of the progress report, I will give a general overview of the work completed so far by the team. The overview will provide as much detail as possible to ensure that all stakeholders get a correct and true picture of the work the team has completed. This summary can be either high-level or very specific.
Progress update
This section of the progress report highlights the progress the team is making on the project. Even where activities are task oriented, I will usually report either on a specific outcome or on progress towards a specific outcome.
Challenges and obstacles
In this section of the progress report, I will give a brief explanation of any challenges and obstacles encountered in the project and explain how these challenges were addressed. If they are still ongoing, I will explain how they are being, or will be, addressed. I would also explain whether there will be a need to ask for assistance to address the challenges.
Action items and any next steps
In this section of the progress report, I will give a brief description of what the team intends to do next. I would also include the activities and tasks the team intends to tackle so that the project keeps running.
Effective Monitoring and Evaluation Plans
A monitoring and evaluation (M&E) plan is a document that helps track and appraise the outcomes of interventions throughout the life of a programme. It is a blueprint of the programme
that should be referred to and updated on a regular basis. Despite the fact that the specifics of
each program’s monitoring and evaluation (M&E) plan will look different, they should all follow
the same basic structure and include the same key elements.
A monitoring and evaluation (M&E) plan will include some documents that may have been
created during the program planning process, and some that will need to be created during the
implementation of the programme. For example, elements such as the logic model/logical
framework, theory of change, and monitoring indicators may have already been developed with
input from key stakeholders and/or the program donor. The monitoring and evaluation (M&E)
plan takes those documents and develops a further plan for their implementation.
As a Director, the following are some of the steps I would use to come up with an effective monitoring and evaluation (M&E) plan:
The first step in creating a monitoring and evaluation (M&E) plan is to identify the programme goals and objectives. If the programme already has a logic model or theory of change, then the programme goals are most likely already defined. However, if not, developing the monitoring and evaluation (M&E) plan is a great place to start defining them.
It is also necessary to develop intermediate outputs and objectives for the programme to help track successful steps on the way to the overall programme goal. More information about identifying these objectives can be found in the logic model guide.
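As an illustration only, the short Python sketch below shows one way the programme goal and its intermediate objectives and outputs could be written down so that later steps of the plan can refer back to them; the programme, objectives and outputs named here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ResultsChain:
    """Minimal record of a programme goal and the intermediate
    objectives/outputs that mark progress towards it (illustrative only)."""
    goal: str
    intermediate_objectives: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

# Hypothetical example: a community health training programme.
chain = ResultsChain(
    goal="Reduce under-five malaria mortality in the district",
    intermediate_objectives=[
        "Caregivers recognise malaria danger signs",
        "Children sleep under insecticide-treated nets",
    ],
    outputs=[
        "Community health workers trained",
        "Insecticide-treated nets distributed",
    ],
)

for objective in chain.intermediate_objectives:
    print("Intermediate objective:", objective)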
Once I have set the programme's goals and objectives, I will then define indicators for tracking progress towards achieving the goals set in step 1. Programme indicators will be a combination of those that measure process, or what is being done in the programme, and those that measure the outcomes of the programme.
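The following minimal Python sketch illustrates the idea of mixing process and outcome indicators; the indicator names are hypothetical examples, not prescribed indicators.

# Illustrative sketch only: one simple way to record a mix of process and
# outcome indicators for the hypothetical programme above.
indicators = [
    {"name": "Number of health workers trained", "type": "process"},
    {"name": "Number of nets distributed", "type": "process"},
    {"name": "% of children sleeping under a net", "type": "outcome"},
    {"name": "Under-five malaria case rate", "type": "outcome"},
]

process_indicators = [i for i in indicators if i["type"] == "process"]
outcome_indicators = [i for i in indicators if i["type"] == "outcome"]
print(f"{len(process_indicators)} process, {len(outcome_indicators)} outcome indicators")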
After creating monitoring indicators, I will then decide on methods for gathering data and how
often various data will be recorded to track indicators. This would be a conversation between
programme staff, stakeholders, and donors. These decisions will have important implications for what data collection methods will be used and how the results will be reported.
The source of monitoring data depends largely on what each indicator is trying to measure. The
program will likely need multiple data sources to answer all of the programming questions.
Below is a table that represents some examples of what data can be collected and how.
Once I have determined how data will be collected, it will also be necessary to decide how often they will be collected. This will be affected by donor requirements, available resources, and the timeline of the intervention. Some data will be gathered continuously by the programme (such as the number of trainings) but recorded every six months or once a year, depending on the monitoring and evaluation (M&E) plan.
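As a rough illustration, and assuming the hypothetical indicators introduced earlier, the sketch below shows one simple way to record the agreed data source, collection method and recording frequency for each indicator.

# Illustrative sketch: recording, for each indicator, the agreed data source,
# collection method and how often it is recorded. Names are hypothetical.
collection_plan = {
    "Number of health workers trained": {
        "source": "training attendance registers",
        "method": "routine programme records",
        "frequency_months": 6,   # recorded every six months
    },
    "% of children sleeping under a net": {
        "source": "household survey",
        "method": "structured questionnaire",
        "frequency_months": 12,  # recorded annually
    },
}

for indicator, plan in collection_plan.items():
    print(f"{indicator}: {plan['method']} from {plan['source']}, "
          f"every {plan['frequency_months']} months")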
The next element of the monitoring and evaluation (M&E) plan is a section on roles and
responsibilities. As the Director, from the early planning stage, I will decide who will be
responsible for collecting the data for each indicator. This will probably be a mix of monitoring
and evaluation (M&E) staff, research staff, and program staff. Everyone will need to work
together to get data collected accurately and in a timely fashion.
Data management roles will be decided with input from all team members so that everyone is on
the same page and knows which indicators they are assigned. This way, when it is time for reporting, there are no surprises.
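A small sketch of this assignment step, using hypothetical indicator names and staff titles, might look as follows; the point is simply that every indicator should have a named owner before collection begins.

# Illustrative sketch: assigning each indicator to a responsible officer and
# checking that nothing is left unassigned before data collection begins.
responsibilities = {
    "Number of health workers trained": "programme officer",
    "% of children sleeping under a net": "M&E officer",
    "Under-five malaria case rate": None,  # not yet assigned
}

unassigned = [name for name, owner in responsibilities.items() if owner is None]
if unassigned:
    print("Indicators still without an assigned owner:", unassigned)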
Once all of the data have been collected, someone will need to compile and analyze them to fill in a results table for internal review and external reporting. This is likely to be an in-house monitoring and evaluation (M&E) manager or research assistant for the programme.
I will ensure that the monitoring and evaluation (M&E) plan includes a section with details about what data will be analyzed and how the results will be presented. I will also ensure that research staff perform any statistical tests needed to get the required answers.
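As an illustrative sketch only, assuming invented monitoring records and targets, the compilation step could look something like this:

# Illustrative sketch: compiling raw monitoring records into a simple results
# table of indicator values against targets. Figures are invented for the example.
records = [
    {"indicator": "Number of health workers trained", "period": "2024-H1", "value": 40},
    {"indicator": "Number of health workers trained", "period": "2024-H2", "value": 55},
    {"indicator": "% of children sleeping under a net", "period": "2024", "value": 62},
]
targets = {
    "Number of health workers trained": 100,
    "% of children sleeping under a net": 80,
}

# Sum counts across periods; keep the latest value for percentage indicators.
results = {}
for record in records:
    name = record["indicator"]
    if name.startswith("%"):
        results[name] = record["value"]
    else:
        results[name] = results.get(name, 0) + record["value"]

for name, achieved in results.items():
    print(f"{name}: achieved {achieved}, target {targets[name]}")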
The last element of the monitoring and evaluation (M&E) plan describes how and to whom data
will be disseminated. Data for data’s sake should not be the ultimate goal of monitoring and
evaluation (M&E) efforts. Data should always be collected for particular purposes.
The monitoring and evaluation (M&E) plan should include plans for internal dissemination
among the program team, as well as wider dissemination among stakeholders and donors. For
example, a program team may want to review data on a monthly basis to make programmatic
decisions and develop future work plans, while meetings with the donor to review data and
program progress might occur quarterly or annually.
Elements of monitoring and evaluation (M&E) plans
As Director, the following are the elements I would adopt to guide me in developing monitoring and evaluation (M&E) plans:
1. INDICATOR
Each indicator will be stated using clear terms that are easy to understand, and will measure only
one thing. I will ensure that if there is more than one thing to measure in the indicator, it will be
restated as separate indicators.
2. INDICATOR DEFINITION
I will provide a detailed definition of the indicator and the terms used, to ensure that different
people at different times will collect identical types of data for that indicator, and measure it the
same way. For a quantitative indicator, I will include a numerator and denominator with a description of how the indicator measurement will be calculated (an illustrative calculation appears in the sketch after this list of elements).
3. BASELINE AND GOAL
I will measure the value of each indicator before project activities begin and set an achievable
goal for the indicator to be reached by the end of the project. The baseline measurement is the
starting point for tracking changes in the indicator(s) over the period of an Action Plan.
4. DATA SOURCE
I will specify the data source for each indicator. I will also consider the pros and cons of each
source (accuracy, availability, cost, etc.) to ensure access to the data. Examples of data sources
include facility records, surveys, websites, published research, and health information systems
(HIS).
5. DATA COLLECTION METHOD
I will specify the method or approach for collecting data for each indicator. For primary data
(data that teams collect themselves), I will note the type of instrument needed to gather the
data (e.g., structured questionnaire, direct observation form, scale to weigh infants). For
indicators based on secondary data (data from existing sources), I will give guidelines on the
method of calculating the indicator.
6. FREQUENCY OF DATA COLLECTION
I will take into consideration the timing of data collection for each indicator. Depending on the
indicator, this may be monthly, quarterly, annually, or less frequently. Baseline data will be
collected for each indicator before activities begin.
7. RESPONSIBILITY FOR COLLECTING DATA
I will identify the staff members responsible for data collection. I will assign specific responsibility to a specific office, team, or individual.
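To illustrate how these seven elements fit together, the sketch below sets out one hypothetical indicator reference sheet and a simple numerator/denominator calculation; all names and figures are invented for illustration.

# Illustrative indicator reference sheet (hypothetical figures) pulling together
# the seven elements above for a single quantitative indicator.
indicator_sheet = {
    "indicator": "% of children under five sleeping under an insecticide-treated net",
    "definition": "Numerator: children under five who slept under a net last night; "
                  "denominator: children under five in surveyed households",
    "baseline": 45.0,          # % measured before activities began
    "goal": 80.0,              # % to be reached by the end of the project
    "data_source": "annual household survey",
    "collection_method": "structured questionnaire",
    "frequency": "annually",
    "responsible": "M&E officer",
}

def indicator_value(numerator: int, denominator: int) -> float:
    """Calculate the indicator as a percentage from its numerator and denominator."""
    return 100.0 * numerator / denominator

current = indicator_value(numerator=310, denominator=500)   # invented survey counts
progress = current - indicator_sheet["baseline"]
print(f"Current value: {current:.1f}% (baseline {indicator_sheet['baseline']}%, "
      f"goal {indicator_sheet['goal']}%, change {progress:+.1f} points)")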
Enhancing Data Quality
Advocacy for M&E is driven mainly by Ghana’s Voluntary Organisation for Professional
Evaluation – the Ghana Monitoring and Evaluation Forum. But more recently, the Ministry of
Monitoring and Evaluation is also promoting M&E. Public sector M&E practice has been
strengthened over the years by training and technical backstopping. One key area emphasized in
the trainings is data quality. What would you recommend to the Commission on enhancing Data
Quality?
Monitoring and evaluation (M&E) systems generate data that are essential for the generation of
vital information used to document progress toward program goals and objectives. Frequently,
these systems produce data that are incomplete, inaccurate, and tardy, owing to insufficient
capacity in the systems under review or inadequate system design.
High-quality data are at the core of programme activities. As the Director, I will ensure that all project teams adhere to stringent and systematic data quality assurance procedures, which can be achieved with data quality assessments (DQAs). Supporting these assessments is at the core of the work of MEASURE Evaluation, which will perform this task on our behalf.
MEASURE Evaluation understands that data must be of high quality if they are to be relied upon
to inform decisions on programme policies, and allocation of scarce resources. I would ensure
that accurate, complete, and timely data show what is happening on the ground at all times, as
bad data call the system itself into question. MEASURE Evaluation conducts data quality
assessments and builds the capacity of low- and middle-income countries (LMICs) to conduct
their own, and to generate and use high-quality data.
Results-focused development programming demands that managers plan and implement programmes based on evidence. Since data play a pivotal role in effective performance management systems, it is necessary to ensure good data quality. In the absence of this, decision
makers will not know whether to have confidence in the data, or worse, will make decisions
based on misleading data.
Data quality is one component of a larger interconnected performance management system. Data
quality flows from a well designed and logical strategic plan where Assistance Objectives (AOs)
and Intermediate Results (IRs) are clearly identified. When a result is poorly defined, it will be
difficult to identify quality indicators, and additionally, without quality indicators, the resulting
data will frequently have data quality problems.
As a Director, I will ensure that the data we will use in all our programmes will meet the
following five data quality standards:
1. Validity
2. Reliability
3. Precision
4. Integrity
5. Timeliness
1. Validity
Validity refers to the degree to which a measure actually signifies what it intends to measure.
Although this principle of validity may appear simple, validity can be difficult to appraise in practice, particularly when measuring social phenomena.
There are a number of ways to organize or present concepts related to data validity and the
following are some of these ways:
Face validity
Face validity means that an outsider or an expert in the field would agree that the data is a true
measure of the result. For data to have high face validity, the data must be true representations of
the indicator, and the indicator must be a valid measure of the result.
Attribution
Attribution focuses on the extent to which a change in the data is related to interventions.
Measurement error
Measurement error results primarily from the poor design or management of data collection
processes.
2. Reliability
Data reliability refers to the extent to which data reflect stable and consistent data collection processes
and analysis methods over time. Reliability is important so that changes in data can be
recognized as true changes rather than reflections of poor or changed data collection methods.
For example, if we use a thermometer to measure a child’s temperature repeatedly and the results
vary from 35 to 40 degrees, even though we know the child’s temperature hasn’t changed, the
thermometer is not a reliable instrument for measuring fever. In other words, if a data collection
process is unreliable due to changes in the data collection instrument, different implementation
across data collectors, or poor question choice, it will be difficult for managers to determine if
changes in data over the life of the project reflect true changes or random error in the data
collection process.
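The thermometer example can be expressed as a small illustrative check, with invented readings and an assumed tolerance:

# Illustrative sketch of the thermometer example: flag an instrument as
# unreliable if repeated measurements of the same unchanged quantity vary
# more than an acceptable tolerance.
readings = [35.2, 38.9, 40.1, 36.4]   # repeated readings of one child, in °C
tolerance = 0.5                        # assumed acceptable spread for a reliable thermometer

spread = max(readings) - min(readings)
if spread > tolerance:
    print(f"Spread of {spread:.1f} °C exceeds {tolerance} °C: instrument looks unreliable")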
3. Precision
Precise data have an adequate level of detail to present a reasonable picture of performance and to enable management decision-making. The margin of error in the data should be smaller than the change the data are expected to detect; otherwise the tool of measurement is considered too imprecise. For some indicators, for which the extent of expected change is large,
even relatively large measurement errors may be perfectly tolerable; for other indicators,
minimal amounts of change will be significant and even moderate levels of measurement error
will be unacceptable.
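A minimal illustrative check of this idea, with invented figures, could be:

# Illustrative sketch: a measurement tool is too imprecise when its margin of
# error is as large as, or larger than, the change it is meant to detect.
expected_change = 5.0   # e.g. percentage-point improvement the programme expects
margin_of_error = 7.0   # error of the measurement tool, hypothetical

if margin_of_error >= expected_change:
    print("Tool too imprecise: the expected change would be lost in measurement error")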
4. Integrity
Integrity focuses on whether there is improper manipulation of data. Data that are collected,
analyzed, and reported should have mechanisms in place to reduce the possibility of manipulation. There are, in the main, two types of issues that affect data integrity. The first is transcription error.
The second, and to some extent more intricate issue, is whether there is any motivation on the
part of the data source to manipulate the data for political or personal reasons.
Transcription Error
Transcription error refers to simple data entry errors made when transcribing data from one
document (electronic or paper) or database to another. Transcription error is avoidable, and
Missions should seek to eliminate any such error when producing internal or external reports and
other documents.
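One simple, illustrative way to spot-check for transcription error is to re-read a sample of values from the source documents and compare them with what was entered in the database; the site names and figures below are invented.

# Illustrative sketch: a spot check for transcription error, comparing a sample
# of values re-verified from source documents against what was entered in the database.
source_values   = {"site_A": 128, "site_B": 74, "site_C": 56}   # re-read from paper records
database_values = {"site_A": 128, "site_B": 47, "site_C": 56}   # as entered in the database

mismatches = {site: (source_values[site], database_values[site])
              for site in source_values
              if source_values[site] != database_values[site]}
print("Transcription discrepancies:", mismatches)   # here site_B shows a digit swap (74 vs 47)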
Manipulation
A rather more intricate issue is whether data is manipulated. Manipulation should be considered
1) if there may be inducement on the part of those that report data to skew the data to benefit the
project or program and managers suspect that this may be a problem,
2) if managers believe that numbers appear to be unusually favorable, or
3) if the data are of high value and managers want to ensure the integrity of the data.
5. Timeliness
This quality standard demands that data be available and sufficiently up to date to meet management needs whenever they are needed. There are two key aspects of timeliness. First, data must be available frequently enough to influence management decision-making. Second, data should be current or, in other words, adequately up to date to be useful in decision-making. As a general guideline, data should lag no more than three years.
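As a final illustration, and using invented dates, a simple timeliness check could flag indicators whose most recent data point is older than the three-year guideline:

# Illustrative sketch of the timeliness standard: flag indicators whose most
# recent data point is older than the guideline of three years.
from datetime import date

last_collected = {
    "Under-five malaria case rate": date(2020, 12, 31),
    "% of children sleeping under a net": date(2023, 6, 30),
}
max_lag_years = 3
today = date(2024, 6, 1)   # reporting date assumed for the example

for indicator, collected in last_collected.items():
    lag_years = (today - collected).days / 365.25
    status = "stale" if lag_years > max_lag_years else "timely"
    print(f"{indicator}: last collected {collected}, lag {lag_years:.1f} years -> {status}")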
References
Anderson, J. E., 2011. Public policy making: An introduction. 7th ed. Boston: Wadsworth.
Babbie, E., 2002. The basics of social research. 2nd ed. UK: Oxford University Press.
Faydi, E. et al., 2011. An assessment of mental health policy in Ghana, South Africa, Uganda and Zambia. Health Research Policy and Systems.
Green, L. and Kreuter, M., 1991. Evaluation and the accountable practitioner. In: Health promotion planning. 2nd ed. Mountain View, CA: Mayfield Publishing Company.
Kambuwa, M. and Wallis, M., 2002. Performance management. Durban: University of Durban-Westville.
Kusek, J. Z. and Rist, C., 2004. Ten steps to a results-based M&E system. Washington, DC: World Bank Publications.
Mackay, K., 1989. Public sector performance: The critical role of evaluation. Selected proceedings from a World Bank seminar. Washington, DC: World Bank Publications.
Mackay, K., 1999. Evaluation capacity development. Washington, DC: World Bank Publications.
Mackay, K., 2000. Monitoring & evaluation: Orientation course. Washington, DC: World Bank Publications.
May, J., Woolard, I. and Klase, S., 2006. The nature and measurement of poverty and inequality. Cape Town: Davids Press.
McNeil, M. and Malena, C., 2010. Demanding good governance: Lessons from social accountability in Africa. Washington, DC: World Bank Publications.
Mosse, D. and Lewis, E. D., 2005. The aid effect: Giving and governing in international development. London: Pluto Press.
OECD, 1996. Development partnerships in new global context. Paris: OECD Publications.
Plaatjie, D. and Porter, S., 2006. A growing demand for monitoring and evaluation in Africa. DPME. Johannesburg: Jacana Press.
Plaatjie, D. and Porter, S., 2013. The role of monitoring and evaluation in the public service. Johannesburg: Jacana Press.
Preskill, H. and Torres, R. T., 1999. Evaluative inquiry for learning in organizations. SAGE Publications.
Rossi, P. H., Lipsey, M. W. and Henry, G. T., 2019. Evaluation: A systematic approach. 8th ed. Sage Publications.
Solomon, P. and Young, R., 2007. Performance-based earned value. San Francisco: John Wiley and Sons.
Taylor-Powell, E. and Henert, E., 2008. Developing a logic model: Teaching and training guide. University of Wisconsin Extension, Program Development and Evaluation.
Witkin, B. R. and Altschuld, J. W., 1995. Planning and conducting needs assessments: A practical guide. Sage Publications.