Basic Guide to Program Evaluation
Some Myths About Program Evaluation
1. Many people believe evaluation is a useless activity that generates lots of boring data with useless conclusions.
This was a problem with evaluations in the past when program evaluation methods were chosen largely on the basis
of achieving complete scientific accuracy, reliability and validity. This approach often generated extensive data from
which very carefully chosen conclusions were drawn. Generalizations and recommendations were avoided. As a
result, evaluation reports tended to reiterate the obvious and left program administrators disappointed and skeptical
about the value of evaluation in general. More recently (especially as a result of Michael Patton's development of
utilization-focused evaluation), evaluation has focused on utility, relevance and practicality at least as much as
scientific validity.
2. Many people believe that evaluation is about proving the success or failure of a program. This myth assumes that
success is implementing the perfect program and never having to hear from employees, customers or clients again --
the program will now run itself perfectly. This doesn't happen in real life. Success is remaining open to continuing
feedback and adjusting the program accordingly. Evaluation gives you this continuing feedback.
3. Many believe that evaluation is a unique and complex process that occurs at a certain time in a certain way,
and almost always includes the use of outside experts. Many people believe they must completely understand terms
such as validity and reliability. They don't have to. They do have to consider what information they need in order to
make current decisions about program issues or needs. And they have to be willing to commit to understanding what
is really going on. Note that many people regularly undertake some form of program evaluation -- they just don't
do it in a formal fashion, so they don't get the most out of their efforts or they draw conclusions that are inaccurate
(some evaluators would argue that this is not program evaluation if it is not done methodically). Consequently, they miss
precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their
buck.
Other Reasons:
Program evaluation can:
4. Facilitate management's thinking about what the program is really all about, including its goals, how it meets its
goals and how it will know whether it has met its goals or not.
5. Produce data or verify results that can be used for public relations and promoting services in the community.
6. Produce valid comparisons between programs to decide which should be retained, e.g., in the face of pending
budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.
Key Considerations:
Consider the following key questions when designing a program evaluation (a brief sketch of recording the answers in a structured form follows the list).
1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide as a result of the
evaluation?
2. Who are the audiences for the information from the evaluation, e.g., customers, bankers, funders, board,
management, staff, clients, etc.?
3. What kinds of information are needed to make the decision you need to make and/or enlighten your intended
audiences, e.g., information to really understand the process of the product or program (its inputs, activities and
outputs), the customers or clients who experience the product or program, strengths and weaknesses of the product
or program, benefits to customers or clients (outcomes), how the product or program failed and why, etc.
4. From what sources should the information be collected, e.g., employees, customers, clients, groups of customers
or clients and employees together, program documentation, etc.
5. How can that information be collected in a reasonable fashion, e.g., questionnaires, interviews, examining
documentation, observing customers or employees, conducting focus groups among customers or employees, etc.
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?
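If it helps to keep the answers in one place, the brief Python sketch below records hypothetical answers to the seven questions above as a simple, structured plan. Every field name and value is an invented example for illustration, not a prescribed template.

```python
# Hypothetical sketch: recording answers to the seven design questions as a simple
# structured plan. All field names and values are invented examples.

evaluation_plan = {
    "purpose": "Decide whether to expand the tutoring program next year",
    "audiences": ["board", "funders", "program staff"],
    "information_needed": ["program strengths and weaknesses", "client outcomes"],
    "sources": ["clients", "employees", "program documentation"],
    "methods": ["questionnaires", "interviews", "documentation review"],
    "needed_by": "end of the second quarter",
    "resources": ["one staff member at 10 hours/week"],
}

# Flag any design question that is still unanswered before data collection begins.
unanswered = [question for question, answer in evaluation_plan.items() if not answer]
print("Unanswered design questions:", ", ".join(unanswered) if unanswered else "none")
```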
Goals-Based Evaluation
Often programs are established to meet one or more specific goals. These goals are often described in the original
program plans.
Goals-based evaluations assess the extent to which programs are meeting predetermined goals or objectives.
Questions to ask yourself when designing an evaluation to see if you reached your goals include:
1. How were the program goals (and objectives, if applicable) established? Was the process effective?
2. What is the status of the program's progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program implementation or operations plan?
If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals? (Depending on the context, this
question might be viewed as a program management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes - know why efforts are behind schedule
before timelines are changed)?
7. How should goals be changed (be careful about making these changes - know why efforts are not achieving the
goals before changing the goals)? Should any goals be added or removed? Why?
8. How should goals be established in the future?
Process-Based Evaluation
Process-based evaluations are geared to fully understanding how a program works -- how it produces the results
that it does. These evaluations are useful when programs are long-standing and have changed over the years, when
employees or customers report a large number of complaints about the program, or when there appear to be large
inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how
a program truly operates (e.g., for replication elsewhere).
There are numerous questions that might be addressed in a process evaluation. These questions can be selected by
carefully considering what is important to know about the program. Examples of questions to ask yourself when
designing an evaluation to understand and/or closely examine the processes in your programs include:
1. On what basis do employees and/or the customers decide that products or services are needed?
2. What is required of employees in order to deliver the product or services?
3. How are employees trained about how to deliver the product or services?
4. How do customers or clients come into the program?
5. What is required of customers or clients?
6. How do employees select which products or services will be provided to the customer or client?
7. What is the general process that customers or clients go through with the product or program?
8. What do customers or clients consider to be strengths of the program?
9. What do staff consider to be strengths of the product or program?
10. What typical complaints are heard from employees and/or customers?
11. What do employees and/or customers recommend to improve the product or program?
12. On what basis do employees and/or the customers decide that the products or services are no longer needed?
Outcomes-Based Evaluation
Program evaluation with an outcomes focus is increasingly important for nonprofits and is often requested by funders.
An outcomes-based evaluation facilitates your asking whether your organization is really doing the right program
activities to bring about the outcomes you believe (or, better yet, have verified) to be needed by your clients, rather
than just engaging in busy activities that seem reasonable to do at the time. Outcomes are benefits to clients from
participation in the program. Outcomes are usually in terms of enhanced learning (knowledge, perceptions/attitudes
or skills) or conditions, e.g., increased literacy, self-reliance, etc. Outcomes are often confused with program outputs
or units of services, e.g., the number of clients who went through a program.
The United Way of America (http://www.unitedway.org/outcomes/) provides an excellent overview of outcomes-based evaluation, including an introduction to outcomes measurement, a program outcome model, why to measure
outcomes, use of program outcome findings by agencies, eight steps to success for measuring outcomes, examples
of outcomes and outcome indicators for various programs and the resources needed for measuring outcomes. The
following information is a top-level summary of information from this site.
To accomplish an outcomes-based evaluation, you should first pilot, or test, this evaluation approach on one or two
programs at most (before doing all programs).
The general steps to accomplish an outcomes-based evaluation are to:
1. Identify the major outcomes that you want to examine or verify for the program under evaluation. You might
reflect on your mission (the overall purpose of your organization) and ask yourself what impacts you will have on
your clients as you work towards your mission. For example, if your overall mission is to provide shelter and
resources to abused women, then ask yourself what benefits those women will gain if you effectively provide them
with shelter and other services or resources. As a last resort, you might ask yourself, "What major activities
are we doing now?" and then for each activity, ask "Why are we doing that?" The answer to this "Why?" question is
usually an outcome. This "last resort" approach, though, may just end up justifying ineffective activities you are
doing now, rather than examining what you should be doing in the first place.
2. Choose the outcomes that you want to examine, prioritize the outcomes and, if your time and resources are
limited, pick the top two to four most important outcomes to examine for now.
3. For each outcome, specify what observable measures, or indicators, will suggest that you're achieving that key
outcome with your clients. This is often the most important and enlightening step in outcomes-based evaluation.
However, it is often the most challenging and even confusing step, too, because you're suddenly going from a rather
intangible concept, e.g., increased self-reliance, to specific activities, e.g., supporting clients to get themselves to and
from work, staying off drugs and alcohol, etc. It helps to have a "devil's advocate" during this phase of identifying
indicators, i.e., someone who can question why you can assume that an outcome was reached because certain
associated indicators were present.
4. Specify a "target" goal of clients, i.e., what number or percent of clients you commit to achieving specific
outcomes with, e.g., "increased self-reliance (an outcome) for 70% of adult, African American women living in the
inner city of Minneapolis as evidenced by the following measures (indicators) ..."
5. Identify what information is needed to show these indicators, e.g., you'll need to know how many clients in the
target group went through the program, how many of them reliably undertook their own transportation to work and
stayed off drugs, etc. If your program is new, you may need to evaluate the process in the program to verify that the
program is indeed carried out according to your original plans. (Michael Patton, a prominent researcher, writer and
consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation
evaluation, verifying that your program ended up being implemented as you originally planned.)
6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use
below). Consider program documentation, observation of program personnel and clients in the program,
questionnaires and interviews about clients' perceived benefits from the program, case studies of program failures
and successes, etc. You may not need all of the above (see Overview of Methods to Collect Information below; a
brief illustrative sketch of tallying indicator data against a target follows this list).
7. Analyze and report the findings (see Analyzing and Interpreting Information below).
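As a rough, hypothetical illustration of steps 4 through 6, the Python sketch below tallies invented indicator data for a handful of clients and compares the result against an assumed 70% target. The client records, indicator names and target value are all made up for the example; real figures would come from program documentation, questionnaires or interviews.

```python
# Hypothetical sketch: tallying outcome-indicator data against a "target" goal.
# The client records, indicator names and the 70% target are invented for this
# example; real data would come from program records, questionnaires or interviews.

clients = [
    # Each record notes whether the client showed each observable indicator.
    {"id": 1, "own_transportation": True,  "stayed_off_drugs": True},
    {"id": 2, "own_transportation": True,  "stayed_off_drugs": False},
    {"id": 3, "own_transportation": False, "stayed_off_drugs": True},
    {"id": 4, "own_transportation": True,  "stayed_off_drugs": True},
]

indicators = ["own_transportation", "stayed_off_drugs"]
target_percent = 70  # e.g., "increased self-reliance for 70% of clients"

# A client counts toward the outcome only if every chosen indicator is present.
achieved = sum(all(client[i] for i in indicators) for client in clients)
percent = 100 * achieved / len(clients)

print(f"{achieved} of {len(clients)} clients ({percent:.0f}%) showed all indicators")
print("Target met" if percent >= target_percent else "Target not yet met")
```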
Overview of Methods to Collect Information
Common methods to collect information include questionnaires, surveys and checklists; interviews; documentation
review; observation; focus groups; and case studies. When selecting among them, weigh each method's overall
purpose, its advantages and its challenges.
Also see: Appreciative Inquiry, Survey Design
Analyzing Information:
1. Read through all of the data collected.
2. Organize comments into similar categories, e.g., concerns, suggestions, strengths, weaknesses, similar
experiences, program inputs, recommendations, outputs, outcome indicators, etc. (a short illustrative sketch of this
grouping step follows the list).
3. Label the categories or themes, e.g., concerns, suggestions, etc.
4. Attempt to identify patterns, or associations and causal relationships, in the themes, e.g., all people who attended
programs in the evening had similar concerns, most people came from the same geographic area, most people were
in the same salary range, respondents who went through similar processes or events during the program reported
similar experiences, etc.
5. Keep all commentary for several years after completion in case it is needed for future reference.
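As a rough, hypothetical illustration of steps 2 through 4 above, the Python sketch below groups a few invented comments into labeled themes and counts how many fall into each one. The comments, theme names and keyword matching are assumptions made only for the example; in practice an evaluator reads and categorizes the comments by hand.

```python
# Hypothetical sketch: grouping respondent comments into labeled categories and
# counting them. The comments, themes and keywords are invented for this example;
# in practice a person, not a keyword match, assigns the themes.

from collections import defaultdict

comments = [
    "Evening sessions were hard to get to",
    "Please offer more evening sessions",
    "Staff were very supportive",
    "The intake paperwork took too long",
]

# Simple keyword-to-theme mapping used only for this illustration.
themes = {
    "concerns": ["hard", "too long"],
    "suggestions": ["please", "more"],
    "strengths": ["supportive", "helpful"],
}

grouped = defaultdict(list)
for comment in comments:
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(k in text for k in keywords):
            grouped[theme].append(comment)

# Report how many comments fell into each theme, to help spot patterns.
for theme, items in grouped.items():
    print(f"{theme}: {len(items)} comment(s)")
    for item in items:
        print("  -", item)
```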
Interpreting Information:
1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised; to
comments from management or program staff; to any common standards for your services; to the original program
goals (especially if you're conducting a goals-based evaluation); to indications of accomplishing outcomes
(especially if you're conducting an outcomes-based evaluation); and to descriptions of the program's experiences,
strengths, weaknesses, etc. (especially if you're conducting a process-based evaluation).
2. Consider recommendations to help program staff improve the program, conclusions about program operations or
meeting goals, etc.
3. Record conclusions and recommendations in a report document, and include the interpretations that justify your
conclusions or recommendations.
Pitfalls to Avoid
1. Don't balk at evaluation because it seems far too "scientific." It's not. Usually the first 20% of effort will generate
the first 80% of the plan, and this is far better than nothing.
2. There is no "perfect" evaluation design. Don't worry about the plan being perfect. It's far more important to do
something, than to wait until every last detail has been tested.
3. Work hard to include some interviews in your evaluation methods. Questionnaires don't capture "the story," and
the story is usually the most powerful depiction of the benefits of your services.
4. Don't interview just the successes. You'll learn a great deal about the program by understanding its failures,
dropouts, etc.
5. Don't throw away evaluation results once a report has been generated. Results don't take up much room, and they
can provide precious information later when trying to understand changes in the program.
General Resources
Guides for many types of evaluation
Program Manager's Guide to Evaluation
Analytical Methods in Maternal and Child Health