
M & E Test


Differences Between Monitoring and Evaluation

Monitoring: Provides information enabling management staff to assess implementation progress and make timely decisions.
Mid-term or final evaluation: Relies on more detailed data (e.g., from surveys or studies), in addition to that collected through the monitoring system, to understand the project in greater depth.

Monitoring: Is concerned with verifying that project activities are being undertaken, services are being delivered, and the project is leading to the desired behavior changes described in the project proposal.
Evaluation: Assesses higher-level outcomes and impact and may verify some of the findings from the monitoring. Evaluations should explore both anticipated and unanticipated results.

Monitoring: Is an internal project activity.
Evaluation: Can be externally led (particularly end-of-project evaluations), though it should involve the active participation of project staff.

Monitoring: Is an essential part of good day-to-day management practice.
Evaluation: Is an essential activity in a longer-term, dynamic learning process.

Monitoring: Is an essential part of day-to-day management and must be integrated within the project management structure.
Evaluation: Is important for making decisions on overall project direction.

Monitoring: Takes place during the implementation phase.
Evaluation: Occurs at predetermined points during implementation. Other, smaller evaluations may be undertaken to meet specific information needs throughout the process.

Monitoring: Generally focuses on the question "Are we doing things right?"
Evaluation: Generally focuses on the question "Are we doing the right thing?"

Coverage: Outputs (products, services, deliverables, reach)
Monitoring question examples: How many people or communities were reached or served? Were the targeted numbers reached?
Evaluation question examples: How adequate was program reach? Did we reach enough people? Did we reach the right people? How well did we access hard-to-reach and vulnerable populations? Did we reach those with the greatest need? Who missed out, and was that fair, ethical, just?

Coverage: Process (design and implementation)
Monitoring question examples: How was the program implemented? Was implementation in accordance with design and specifications?
Evaluation question examples: How well was the program implemented? Fairly, ethically, legally, culturally appropriately, professionally, efficiently? For outreach, did we use the best avenues and methods we could have?

Coverage: Outcomes (things that happen to people or communities)
Monitoring question examples: What has changed since (and as a result of) program implementation? How much have outcomes changed relative to targets?
Evaluation question examples: How substantial and valuable were the outcomes? How well did they meet the most important needs and help realize the most important aspirations? Should they be considered truly impressive, mediocre, or unacceptably weak? Were they not just statistically significant, but educationally, socially, economically, and practically significant? Did they make a real difference in people's lives? Were the outcomes worth achieving given the effort and investment put into obtaining them?


Steps to Developing an M&E Work Plan


1. Identify program goals and objectives.
2. Determine M&E questions, indicators, and their feasibility.
3. Determine M&E methodology for monitoring the process and evaluating the effects.
4. Resolve implementation issues: Who will conduct the monitoring and evaluation? How will existing M&E data and data from past evaluation studies be used?
5. Identify internal and external M&E resources and capacity.
6. Develop an M&E Work Plan matrix and timeline.
7. Develop a plan to disseminate and use evaluation findings.
Seven Steps to Developing a Monitoring and Evaluation Work Plan
If these elements have not already been developed, the following steps may help in creating a monitoring and evaluation plan.

1. Identify Program Goals and Objectives


The first step requires writing a clear statement that identifies country (site) program goals and objectives (and sometimes sub-objectives) and describes how the program expects to achieve them. A program logic model or results framework can then be easily diagrammed to establish a monitoring and evaluation plan. The country evaluation matrix in the appendix illustrates a results framework (with sample goals, objectives, activities, indicators, sources of data and methods, periodicity, and persons responsible) for gathering monitoring and evaluation data at the country level.

This framework illustrates how the role of national governments in monitoring and planning HIV prevention and care activities complements the strengths of individual projects at the local level. For example, individual projects do not often conduct impact evaluations because the results are hard to separate from those of other projects that work toward the same goals. Impact evaluation, most appropriately measured in large geographic areas, examines whether the collective efforts of numerous projects are producing the desired effect. These impacts can be measured through sero-surveillance systems (which monitor trends in HIV and STI prevalence) and through repeated behavioral risk surveys. Local organizations in direct contact with target groups should evaluate the program's implementation, rather than its outcome or impact. This demands greater concentration on quality inputs, such as training and the pre-testing of communication messages.

The framework also illustrates the time required to show progress at various levels, ranging from several months for process-level accomplishments (the training of staff) to several years for outcome- and impact-level goals.
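To make the shape of such a results framework concrete, the sketch below (not part of the original handout) shows how its columns could be captured as structured data. All names and values here are hypothetical; only the headings mirror the matrix columns described above.

```python
# Hypothetical sketch of a results framework as structured data. The keys
# mirror the matrix columns described above (goals, objectives, activities,
# indicators, data sources and methods, periodicity, responsibility).
results_framework = {
    "goal": "Reduce HIV transmission in the target region",
    "objectives": [
        {
            "objective": "Increase condom use among high-risk groups",
            "activities": ["Train peer educators", "Distribute condoms"],
            "indicators": [
                {
                    "indicator": "% of target group reporting condom use",
                    "data_source": "Behavioral risk survey",
                    "method": "Repeated cross-sectional survey",
                    "periodicity": "Every 2 years",  # outcome-level change takes years
                    "responsible": "National AIDS Commission M&E unit",
                },
                {
                    "indicator": "Number of peer educators trained",
                    "data_source": "Training records",
                    "method": "Routine monitoring forms",
                    "periodicity": "Quarterly",  # process-level change shows in months
                    "responsible": "Project staff",
                },
            ],
        }
    ],
}
```

Note how the two indicators echo the point above about timeframes: the process-level indicator is reported quarterly, while the outcome-level indicator is measured over years.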

2. Determine Monitoring and Evaluation Questions, Indicators, and Their Feasibility
In this step, monitoring and evaluation specialists and program managers identify the most important evaluation questions, which should link directly to the stated goals and objectives. Questions should come from all stakeholders, including the program managers, donors, and members of the target populations. The questions should address each group's concerns, focusing on these areas: What do we want to know at the end of this program? What do we expect to change by the end of this program? Framing and prioritizing monitoring and evaluation questions is sometimes difficult, especially when resources, time, and expertise are limited and multiple stakeholders are present. Monitoring and evaluation questions may require revision later in the plan development process.

3. Determine Monitoring and Evaluation Methodology: Monitoring the Process and Evaluating the Effects
This step should include the monitoring and evaluation methods, data collection methods and tools, the analysis plan, and an overall timeline. It is crucial to spell out clearly how data will be collected to answer the monitoring and evaluation questions. The planning team determines the appropriate monitoring and evaluation methods, outcome measures or indicators, information needs, and the methods by which the data will be gathered and analyzed. A plan must be developed to collect and process data and to maintain an accessible data system. The plan should address the following issues:

- What information needs to be monitored?
- How will the information be collected? How will it be recorded? How will it be reported to the central office?
- What tools (forms) will be needed?
- For issues that require more sophisticated data collection, what study design will be used?
- Will the data be qualitative, quantitative, or a combination of the two?
- Which outcomes will be measured?
- How will the data be analyzed and disseminated?
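As an illustration of the decisions this step records, the hypothetical sketch below captures answers to those planning questions for a single monitoring question and adds a simple progress check against a target. The field names and figures are invented for the example, not taken from the handout.

```python
# Hypothetical sketch: the data-collection decisions for one monitoring
# question, recorded as fields answering the planning questions above.
collection_plan = {
    "monitoring_question": "How many people were reached by outreach sessions?",
    "data_type": "quantitative",  # qualitative, quantitative, or both
    "collection_method": "Session attendance forms",
    "recorded_in": "Monthly tally sheet",
    "reported_to": "Central office, monthly summary report",
    "tools_needed": ["Attendance form", "Monthly summary template"],
}

def percent_of_target(reached: int, target: int) -> float:
    """Return indicator progress as a percentage of the target."""
    return 100.0 * reached / target

print(percent_of_target(reached=1_840, target=2_500))  # 73.6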

4. Resolve Implementation Issues: Who Will Conduct Monitoring and Evaluation? How Will Existing Monitoring and Evaluation Results and Past Findings Be Used?
Once the data collection methods are established, it is important to state clearly who will be responsible for each activity. Program managers must decide: How are we really going to implement this plan? Who will report the process data, and who will collect and analyze it? Who will oversee any quantitative data collection, and who will be responsible for its analysis? Clearly, the success of the plan depends on the technical capacity of program staff to carry out monitoring and evaluation activities. This invariably requires technical assistance. Monitoring and evaluation specialists might be found in planning and evaluation units of the National AIDS Commission, the National AIDS and STD Control Program, the Census Bureau, the Office of Statistics, multisectoral government ministries (including the Ministry of Health), academic institutions, non-governmental organizations, and private consulting firms. It is also important to identify existing data sources and other monitoring and evaluation activities, whether completed in the past, ongoing, or sponsored by other donors. At this step, country office monitoring and evaluation specialists should determine whether other groups are planning similar evaluations and, if so, invite them to collaborate.

5. Identify Internal and External Monitoring and Evaluation Resources and Capacity
Identifying monitoring and evaluation resources means identifying not just the funds for monitoring and evaluation, but also experienced personnel who can assist in planning and conducting monitoring and evaluation activities. It also means determining the program's capacity to manage and link various databases and computer systems.

6. Develop the Monitoring and Evaluation Work Plan Matrix and Timeline
The matrix provides a format for presenting the inputs, outputs, outcomes, and impacts (and their corresponding activities) for each program objective. It summarizes the overall monitoring and evaluation plan by including a list of methods to be used in collecting the data. (An example of a matrix is presented in the appendix.) The timeline shows when each activity in the Monitoring and Evaluation Work Plan will take place.
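Because the matrix is tabular, it can be sketched as rows plus a timeline. The example below is hypothetical (the handout's own sample matrix is in its appendix); the objectives, methods, and dates are invented to show the shape of the rows.

```python
# Hypothetical sketch of M&E Work Plan matrix rows. Each row ties an
# objective to its result level, data collection method, and scheduled dates.
workplan_matrix = [
    # (objective, result level, method, timeline)
    ("Train 50 outreach workers", "output", "Training attendance records", ["2025-03", "2025-09"]),
    ("Increase knowledge of prevention", "outcome", "Pre/post survey", ["2025-10"]),
    ("Reduce new infections in district", "impact", "Sero-surveillance data", ["2027-01"]),
]

for objective, level, method, timeline in workplan_matrix:
    print(f"{level:8} | {objective:36} | {method:28} | {', '.join(timeline)}")
```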

7. Develop a Plan to Disseminate and Use Evaluation Findings


The last step is planning how monitoring and evaluation results will be used: translated into program policy language, disseminated to relevant stakeholders and decision-makers, and applied to ongoing program refinement. This step is not always performed, but it should be, because it ensures that monitoring and evaluation findings inform program improvement and decision-making. A mechanism for providing feedback to program and evaluation planners should be built in so that lessons learned can be applied to subsequent efforts.

This step often surfaces only when a complication at the end of the program prompts someone to ask, "How has monitoring and evaluation been implemented, and how have the results been used to improve HIV prevention and care programs and policies?" If no plan was in place for disseminating monitoring and evaluation results, this question often cannot be answered because the monitoring and evaluation specialists have forgotten the details or have moved on. The absence of a plan can undermine the usefulness of current monitoring and evaluation efforts and future activities. Inadequate dissemination might lead to duplicate monitoring and evaluation efforts because others are not aware of the earlier work. It also reinforces the negative stereotype that monitoring and evaluation are not truly intended to improve programs. For these reasons, programs should include a plan for disseminating and using monitoring and evaluation results in their overall Monitoring and Evaluation Work Plan.

Step-by-Step Guide to Creating Your M&E Plan



Step 1. Identify your evaluation audience


Identify who the evaluation audience or stakeholders are. The evaluation audience includes the people or organisations that require an evaluation to be conducted. There may be multiple audiences, each with their own requirements. Typically, this includes the funding agency, and may also include partner organisations, the Council (or Councillors), the project team, and the project's participants or target group. Remember that evaluation is generally undertaken for accountability or learning, and preferably both together.

If you have limited funds for evaluation, you may have to prioritise by identifying the most important people to report to. Download the M&E Audience and Evaluation Questions Template.

Step 2. Define the evaluation questions


Evaluation questions should be developed up-front, in collaboration with the primary audience(s) and other stakeholders you intend to report to. Evaluation questions go beyond measurements to ask higher-order questions, such as whether the intervention was worth it, or whether it could have been achieved in another way (see examples below). Overall, evaluation questions should lead to further action such as project improvement, project mainstreaming, or project redesign. You should also identify at this stage whether the evaluation audience has specific timelines by which it requires an evaluation report. This will be a major factor in deciding what you can and cannot collect.

Broad types of evaluation questions by focus area:
Process: How well was the project designed and implemented (i.e. its quality)?
Outcome: Did the project meet the overall needs? Was any change significant, and was it attributable to the project? How valuable are the outcomes to the organisation, other stakeholders, and participants?
Learnings: What worked and what did not? What were unintended consequences? What were emergent properties?
Investment: Was the project cost effective? Was there another alternative that may have represented a better investment?
What next: Can the project be scaled up? Can the project be replicated elsewhere? Is the change self-sustaining or does it require continued intervention?
Theory of change: Does the project have a theory of change? Is the theory of change reflected in the program logic? How can the program logic inform the research questions?

Source: Davidson & Wehipeihana (2010)

Another way of classifying broad evaluation questions is presented below.

Relevance: Do the workshop topic and contents meet the information needs of the target group? To what extent is the intervention goal in line with the needs and priorities of the community?
Efficiency: Did the engagement method used in this project lead to similar numbers of participants as previous or other programs at a comparable or lesser cost? Have the more expensive engagement approaches led to better results than the less expensive approaches?
Effectiveness: To what extent did the workshops lead to increased community support for action to tackle climate change? To what extent did the engagement method encourage the target group to take part in the project?
Outcome: To what extent has the project led to more sustainable behaviours in the target group? Were there any other unintended positive or negative outcomes from the project?
Sustainability: To what extent has the project led to long-term behaviour change?

Source: Europe Aid Cooperation Office

Step 3. Identify the monitoring questions


To answer the evaluation questions, monitoring questions must be developed that determine what data will be collected through the monitoring process. Monitoring questions are quite specific in what they ask, compared to evaluation questions. For example, for an evaluation question of "What worked and what did not?" you may have several specific questions such as "Did the workshops lead to increased knowledge on energy efficiency in the home?" or "Did participants install water-efficient showerheads?" The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important not to leap straight into collecting data without thinking about the evaluation questions. Jumping straight in may lead to collecting data that provides no useful information, which is a waste of time and money. Download the M&E Plan Template to input the information.

If you have developed a program logic, you can use it to start identifying relevant monitoring questions and indicators. Click here to see how. (LINK TO PPT IDENTIFYING MONITORING QUESTIONS FROM PROGRAM LOGIC) Once you have identified monitoring questions in your program logic, you can transfer them into your M&E Plan Template. Click here to see how. (LINK TO PPT FROM PROGRAM LOGIC TO M&E)
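Using the guide's own example questions, the link between one broad evaluation question and its more specific monitoring questions could be recorded as a simple mapping. The structure below is a hypothetical sketch, not the guide's template.

```python
# Hypothetical sketch: each broad evaluation question maps to the specific
# monitoring questions whose data will answer it. The example questions are
# taken from the text above.
evaluation_to_monitoring = {
    "What worked and what did not?": [
        "Did the workshops lead to increased knowledge on energy efficiency in the home?",
        "Did participants install water-efficient showerheads?",
    ],
}

for eval_q, monitoring_qs in evaluation_to_monitoring.items():
    print(eval_q)
    for q in monitoring_qs:
        print("  -", q)
```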

Step 4. Identify the indicators and data sources


The next step is to identify what information you need to answer your monitoring questions (indicators) and where this information will come from (data sources). Data sources could be the participants themselves, people's homes (e.g. an audit of lighting types), metering, or even literature. You can then decide on the most appropriate method to collect the data from each data source. The evaluation method selector is there to assist you in selecting an appropriate method for your needs.

Step 5. Identify who is responsible for data collection and timelines


It is advisable to assign responsibility for the data collection so that everyone is clear about their roles and responsibilities. This also allows new staff to come onto the project and get a sense of who is responsible for what, and what they may have to take on and when. Collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or annually. Again, assigning timelines limits the excuse of not knowing. You may also want to note any requirements that are needed to collect the data (staff, budget, etc.). It is advisable to have some idea of the cost associated with monitoring, as you may have great ideas for collecting a lot of information, only to find out that you cannot afford it all. In such a case, you will have to either prioritise or find some money elsewhere (sorry, but we have no special tool for that).
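Pulling Steps 3 to 5 together, one complete row of an M&E plan might look like the hypothetical sketch below. The field names simply mirror the items discussed (indicator, data source, method, responsibility, timing, resources); the values are invented for illustration.

```python
# Hypothetical sketch of one complete M&E plan entry, combining Steps 3-5:
# the monitoring question, its indicator and data source (Step 4), and the
# responsibility, timing, and resource notes (Step 5).
plan_entry = {
    "monitoring_question": "Did participants install water-efficient showerheads?",
    "indicator": "% of participating households with a water-efficient showerhead",
    "data_source": "Participants (follow-up survey); home audits for a subsample",
    "method": "Telephone survey plus spot-check audits",
    "responsible": "Project officer",
    "frequency": "6 and 12 months after the workshops",
    "resources": "Staff time for calls; small budget for audit visits",
}
```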

Step 6. Identify who will evaluate the data, how it will be reported, and when
This step is optional but highly recommended, as it will round off the M&E plan as a complete document. Remembering that evaluation is the subjective assessment of a project's worth, it is important to identify who will be making this subjective assessment. In most cases, it will be the project team, but in some cases you may involve other stakeholders, including the target group or participants. You may also consider outsourcing a particular part of the evaluation to an external or independent party. For an evaluation to be used (and therefore useful), it is important to present the findings in a format that is appropriate to the audience. This may mean a short report, a memo, or even a poster or newsletter. As such, it is recommended that you consider from the start how you will present your evaluation, so that you can tailor the way you present your findings (such as graphs, tables, text, images) to that format.

Step 7. Review the M&E plan


Once you have completed your M&E plan, highlight data sources that appear frequently. For example, you may be able to develop surveys that fulfil the data collection requirements for many questions. Also consider re-ordering the M&E plan in several ways, for example, by data source, or by data collection timeframe. Finally, go through this checklist. Does your M&E plan:

- Focus on the key evaluation questions and the evaluation audience?
- Capture all that you need to know in order to make a meaningful evaluation of the project?
- Ask only relevant monitoring questions and avoid the collection of unnecessary data?
- Specify how data will be analysed, used and reported?
- Work within your budget and other resources?
- Identify the skills required to conduct the data collection and analysis?
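The earlier advice to highlight data sources that appear frequently lends itself to a quick tally. Below is a minimal sketch, assuming plan entries shaped like the hypothetical ones above; the questions and sources are invented.

```python
# Hypothetical sketch: count how often each data source appears across the
# plan, to spot where one survey could answer many monitoring questions.
from collections import Counter

plan_rows = [
    {"question": "Did knowledge of energy efficiency increase?", "data_source": "Participant follow-up survey"},
    {"question": "Did participants install water-efficient showerheads?", "data_source": "Participant follow-up survey"},
    {"question": "How many people attended each workshop?", "data_source": "Workshop attendance records"},
]

counts = Counter(row["data_source"] for row in plan_rows)
for source, n in counts.most_common():
    print(f"{n}x  {source}")
# A source appearing many times (here the follow-up survey) is a good
# candidate for one combined instrument covering several questions.
```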
