Section 6: Planning, Monitoring and Evaluation of Health Services
Monitoring and Evaluation (M&E)
• Monitoring progress and evaluating results are key functions
to improve the performance of those responsible for
implementing health services.
• M&E shows whether a service/program is accomplishing its
goals. It identifies program weaknesses and strengths, areas
of the program that need revision, and areas of the program
that meet or exceed expectations.
• To do this, analysis of any or all of a program’s domains is
required
Where does M&E fit?
Monitoring versus Evaluation
Monitoring
A planned, systematic process
of observation that closely
follows a course of
activities, and compares
what is happening with
what is expected to happen
Evaluation
A process that assesses an
achievement against preset
criteria.
Has a variety of purposes and
follows distinct
methodologies (process,
outcome, performance,
etc.).
Evaluation
• A systematic process to
determine the extent to which
service needs and results have
been or are being achieved
and analyse the reasons for
any discrepancy.
• Attempts to measure a service’s
relevance, efficiency and
effectiveness. It measures
whether and to what extent
the programme’s inputs and
services are improving the
quality of people’s lives.
Monitoring
• The periodic collection and
review of information on
programme implementation,
coverage and use, for comparison
with implementation plans (a
planned-versus-actual sketch
follows this list).
• Leaves room for modifying original
plans during implementation.
• Identifies shortcomings before
it is too late.
• Provides elements of analysis
as to why progress fell short of
expectations
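To make the planned-versus-actual comparison concrete, here is a minimal sketch in Python. The quarterly figures, field names, and the 10% tolerance are illustrative assumptions, not values from the slides:

# Minimal sketch (Python): compare actual programme coverage with the
# implementation plan for each reporting period and flag shortfalls early.
# The quarterly figures and the 10% tolerance are illustrative assumptions.

plan = {"Q1": 2000, "Q2": 4500, "Q3": 7000, "Q4": 10000}   # planned children immunized
actual = {"Q1": 1900, "Q2": 3600, "Q3": 6800}               # reported to date

TOLERANCE = 0.10  # flag any period more than 10% below plan

for period, planned in plan.items():
    if period not in actual:
        print(f"{period}: no report yet")
        continue
    achieved = actual[period]
    gap = (planned - achieved) / planned
    status = "on track" if gap <= TOLERANCE else "shortfall: review the plan"
    print(f"{period}: planned {planned}, actual {achieved}, gap {gap:.0%} -> {status}")

In practice the plan and the reported figures would come from routine monitoring records rather than hard-coded values.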
Comparison between Monitoring and Evaluation
Evaluation can focus on:
• Projects
normally consist of a set of activities undertaken to achieve specific
objectives within a given budget and time period.
• Programs
are organized sets of projects or services concerned with a particular
sector or geographic region
• Services
are based on a permanent structure and have the goal of becoming
national in coverage (e.g. health services), whereas programmes are
usually limited in time or area.
• Processes
are organizational operations of a continuous and supporting nature
(e.g. personnel procedures, administrative support for projects,
distribution systems, information systems, management operations).
• Conditions
are particular characteristics or states of being of persons or things (e.g.
disease, nutritional status, literacy, income level).
[Diagram showing the possible foci of evaluation: Projects, Programs, Services, Processes, Conditions]
Evaluation may focus on different aspects of a service or program:
• Inputs
are resources provided for an activity, and include cash, supplies,
personnel, equipment and training.
• Processes
transform inputs into outputs.
• Outputs
are the specific products or services that an activity is expected to
deliver as a result of receiving the inputs.
• A service is effective if
it “works”, i.e. it delivers outputs in accordance with its objectives.
• A service is efficient or cost-effective if
effectiveness is achieved at the lowest practical cost (a short
numeric sketch follows this list).
• Outcomes
refer to people’s responses to a programme and how they are doing
things differently as a result of it. They are short-term effects
related to objectives.
• Impacts
are the effects of the service on the people and their surroundings.
These may be economic, social, organizational, health,
environmental, or other intended or unintended results of the
programme. Impacts are long-term effects.
[Diagram of the results chain: Inputs feed Processes, which produce Outputs (judged for efficiency and effectiveness), leading to Outcomes and Impacts]
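As a hedged illustration of how the chain above supports effectiveness and efficiency judgements, the short Python sketch below computes output achievement against an objective and cost per output. All figures are invented for illustration:

# Illustrative sketch (Python) of judging a results chain:
# inputs -> processes -> outputs, assessed for effectiveness (outputs versus
# the objective) and efficiency (cost per output). All numbers are invented.

inputs_cost = 50_000        # total resources provided (cash, supplies, personnel)
target_outputs = 4_000      # outputs the objective calls for (e.g. consultations)
delivered_outputs = 3_600   # outputs actually delivered

effectiveness = delivered_outputs / target_outputs   # share of the objective achieved
cost_per_output = inputs_cost / delivered_outputs    # an efficiency measure

print(f"Effectiveness: {effectiveness:.0%} of the objective delivered")
print(f"Efficiency: {cost_per_output:.2f} spent per output")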
So what do you think?
• When is evaluation desirable?
When Is Evaluation Desirable?
• Program evaluation is often used when programs have
been functioning for some time. This is called
Retrospective Evaluation.
• However, evaluation should also be conducted when a new
program within a service is being introduced. These are
called Prospective Evaluations.
• A prospective evaluation identifies ways to increase the
impact of a program on clients; it examines and describes a
program’s attributes; and it identifies how to make
delivery mechanisms more effective.
Prospective versus Retrospective Evaluation
• Prospective Evaluation determines
what ought to happen (and why)
• Retrospective Evaluation determines what
actually happened (and why)
Evaluation Matrix
The broadest and most common classification of
evaluation identifies two kinds of evaluation:
• Formative evaluation.
Evaluation of components and activities of a program
other than their outcomes. (Structure and Process
Evaluation)
• Summative evaluation.
Evaluation of the degree to which a program has
achieved its desired outcomes, and the degree to
which any other outcomes (positive or negative) have
resulted from the program.
Components of Comprehensive Evaluation
Evaluation Designs
• Ongoing service/program evaluation
• End of program evaluation
• Impact evaluation
• Spot check evaluation
• Desk evaluation
Who conducts evaluation?
• Internal evaluation (self evaluation), in which
people within a program sponsor, conduct and
control the evaluation.
• External evaluation, in which someone from
beyond the program acts as the evaluator and
controls the evaluation.
Tradeoffs between External and Internal Evaluation
Source: Adapted from UNICEF Guide for Monitoring and Evaluation, 1991.
Phase A: Planning the Evaluation
• Determine the purpose of the
evaluation.
• Decide on type of evaluation.
• Decide on who conducts
evaluation (evaluation team)
• Review existing information in
programme documents including
monitoring information.
• List the relevant information
sources
• Describe the programme. *
• Assess your own strengths and
limitations.
* Provide background information on the history and current status
of the programme being evaluated, including:
• How it works (its objectives, strategies and management processes)
• Policy environment
• Economic and financial feasibility
• Institutional capacity
• Socio-cultural aspects
• Participation and ownership
• Environment
• Technology
Phase B: Selecting Appropriate Evaluation Methods
• Identify evaluation goals and objectives.
(SMART)
• Formulate evaluation questions and sub-questions
• Decide on the appropriate evaluation
design
• Identify measurement standards
• Identify measurement indicators
• Develop an evaluation schedule
• Develop a budget for the evaluation.
Sample evaluation questions: What might
stakeholders want to know?
Program clients:
• Does this program provide us
with high quality service?
• Are some clients provided with
better services than other
clients? If so, why?
Program Staff:
• Does this program provide our
clients with high quality service?
• Should staff make any changes in
how they perform their work, as
individuals and as a team, to
improve program processes and
outcomes?
Program managers:
• Does this program provide our
clients with high quality service?
• Are there ways managers can
improve or change their
activities, to improve program
processes and outcomes?
Funding bodies:
• Does this program provide its
clients with high quality service?
• Is the program cost-effective?
• Should we make changes in how
we fund this program or in the
level of funding to the program?
Characteristics of Indicators
• Clarity: easily understandable by everybody
• Usefulness: represents all the important dimensions of
performance
• Measurability (see the sketch after this list)
▫ Quantitative: rates, proportions, percentages, with a common
denominator (e.g., population)
▫ Qualitative: “yes” or “no”
• Reliability: can be collected consistently by
different data collectors
• Validity: measures what we mean to measure
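The sketch below illustrates the measurability point: a quantitative indicator expressed as a numerator over a common denominator, plus a derived qualitative yes/no check. The indicator name, counts, and 80% target are assumptions made up for this example:

# Sketch (Python) of a quantitative indicator: a rate with a common denominator.
# Hypothetical example: antenatal-care (ANC) coverage = women receiving ANC
# divided by eligible women. The counts and the 80% target are made up.

def coverage_rate(numerator: int, denominator: int) -> float:
    """Return a coverage proportion; report it as a %, per 1,000, etc. as needed."""
    if denominator == 0:
        raise ValueError("The denominator (target population) must be non-zero")
    return numerator / denominator

women_receiving_anc = 1_240
eligible_women = 1_600

rate = coverage_rate(women_receiving_anc, eligible_women)
print(f"ANC coverage: {rate:.1%}")                             # quantitative indicator
print(f"Target (80%) met? {'yes' if rate >= 0.80 else 'no'}")  # qualitative yes/no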
Which Indicators?
The following questions can help determine
measurable indicators:
– How will I know if an objective has been
accomplished?
– What would be considered effective?
– What would be a success?
– What change is expected?
Phase C: Collecting and Analysing Information
• Develop data collection instruments.
• Pre-test data collection instruments.
• Undertake data collection activities.
• Analyse data.
• Interpret the data.
Gathering of Qualitative and Quantitative
Information: Instruments
Qualitative tools:
There are five frequently used data collection processes in qualitative
evaluation (more than one method can be used):
1. Unobtrusive seeing, involving an observer who is not seen by those who
are observed;
2. Participant observation, involving an observer who does not take part in
an activity but is seen by the activity’s participants.
3. Interviewing, involving a more active role for the evaluator because
she/he poses questions to the respondent, usually on a one-to-one basis;
4. Group-based data collection processes such as focus groups; and
5. Content analysis, which involves reviewing documents and transcripts to
identify patterns within the material (a small keyword-counting sketch
follows this list).
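The following minimal Python sketch shows the keyword-counting idea behind content analysis. The transcripts and theme words are placeholders, and real content analysis would use richer coding than simple substring counts:

# Minimal content-analysis sketch (Python): count how often themes of interest
# appear across interview transcripts to surface recurring patterns.
# The transcripts and theme words are placeholders.

from collections import Counter

transcripts = [
    "The clinic waiting time is too long, but staff are respectful.",
    "Waiting time improved after the new triage desk was introduced.",
    "Medicines were out of stock twice last month.",
]

themes = ["waiting", "stock", "staff", "triage"]

counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for theme in themes:
        counts[theme] += lowered.count(theme)

for theme, n in counts.most_common():
    print(f"{theme}: mentioned {n} time(s)")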
Quantitative tools:
• “Quantitative, or numeric, information is obtained
from various databases and can be expressed using
statistics.” (A basic-statistics sketch follows this list.)
• Surveys/questionnaires;
• Registries;
• Activity logs;
• Administrative records;
• Patient/client charts;
• Registration forms;
• Case studies;
• Attendance sheets.
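As a small illustration of expressing such numeric records with statistics, the sketch below summarizes a made-up series of monthly attendance figures; the values and the choice of descriptive statistics are assumptions for this example:

# Sketch (Python): summarize numeric information drawn from a registry or
# attendance sheet with basic descriptive statistics. The values are invented.

from statistics import mean, median, stdev

monthly_attendances = [212, 198, 240, 225, 260, 231]  # e.g. monthly clinic attendance

print(f"Months reported : {len(monthly_attendances)}")
print(f"Mean attendance : {mean(monthly_attendances):.1f}")
print(f"Median          : {median(monthly_attendances)}")
print(f"Std. deviation  : {stdev(monthly_attendances):.1f}")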
Reporting Findings
• Write the evaluation report.
• Decide on the method of sharing the
evaluation results and on communication
strategies.
• Share the draft report with stakeholders and revise it as
needed, then follow up.
• Disseminate the evaluation report.
References
• UNICEF. “Evaluation Reports Standards”, 2004.
• USAID. “Performance Monitoring and Evaluation – TIPS #3:
Preparing an Evaluation Scope of Work”, 1996, and “TIPS #11:
The Role of Evaluation in USAID”, 1997, Center for Development
Information and Evaluation. Available at
http://www.dec.org/usaid_eval/#004
• U.S. Centers for Disease Control and Prevention (CDC). “Framework
for Program Evaluation in Public Health”, 1999. Available in
English at http://www.cdc.gov/eval/over.htm
• U.S. Department of Health and Human Services. Administration on
Children, Youth, and Families (ACYF), “The Program Manager’s
Guide to Evaluation”, 1997.
