
Lecture 6


Public Policy Cycle

“EVALUATION OF POLICY”
DIFFERENCE BETWEEN POLICY
ANALYSIS AND EVALUATION
Evaluation differs from the analysis performed in problem formulation, selection of criteria, and comparison of alternatives. The evaluator uses much of the same information and asks some of the same main questions as the original analysis, but in evaluation the set of alternatives and the scope for creative problem redefinition are sharply circumscribed.

Evaluating Public Programs: Program evaluation is a way of bringing to public decision-makers the available knowledge about a problem, about the relative effectiveness of past and current strategies for addressing or reducing that problem, and about the observed effectiveness of particular programs.
Why Is Policy Evaluation Important?
Policy evaluation, like all evaluation, can serve important purposes along the entire chain of the policy process, including the following:
• Documenting policy development
• Documenting and informing implementation
• Assessing support for and compliance with existing policies
• Demonstrating the impacts and value of a policy
• Informing an evidence base
• Informing future policies
• Providing accountability for resources invested
Administrative Purposes for Evaluation
Policy formulation – to assess or justify the need for a new program and to design it optimally on the
basis of past experience.
– Information on the problem addressed by the program: how big is it? What is its frequency and
direction? How is it changing?
– Information on the results of past programs that dealt with the problem: were those programs
feasible? Were they successful? What difficulties did they encounter?
– Information allowing the selection of one program over another: what are the comparative costs
and benefits? What kinds of growth records were experienced?
Policy execution – to ensure that a program is implemented in a cost-effective and technically competent way.
– Information on program implementation: how operational is the program? How similar is it across
sites? Does it conform to the policies and expectations formulated? How much does it cost? How
do stakeholders feel about it? Are there delivery problems or error, fraud, and abuse?
– Information on program management: what degree of control exists over expenditures? What are
the qualifications and credentials of the personnel? What is the allocation of resources? How is
program information used in decision making?
– Ongoing information on the current state of the problem or threat addressed in the program: is the
problem growing? Is it diminishing? Is it diminishing enough so that the program is no longer
needed? Is it changing in terms of its significant characteristics?
Accountability in public decision making – to determine the effectiveness of an operating program and the
need for its continuation, modification, or termination.
– Information on program outcomes or effects: what happened as a result of program
implementation?
– Information on the degree to which the program made or is making a difference: what change in
the problem or threat has occurred that can be directly attributed to the program?
– Information on the unexpected (and expected) effects of the program.
Functions and Roles of Evaluation Sponsors
• As a general rule, public administrators should expect their work on program effectiveness and feasibility to be of more general use than their work on implementation, which will be of most use to program managers and agency heads.
• Information needs will be larger for large programs than for small ones, and for new programs than for old ones.

Executive branch (federal, provincial, local):
– Program managers (cost-effectiveness).
– Agency heads and top policy makers (need, effectiveness).
– Central budget or policy authorities (effectiveness, need).
Legislative branch:
– Parliamentarians and legislative policy and evaluation offices (all aspects).
– Legislative authorization, appropriations, and budget committees (program funding and refunding).
– Oversight committees (all aspects).
POLICY EVALUATION
CONCEPT

Evaluation is a critical component of policy making at all levels. Evaluations allow informed design and modification of policies and programs to increase effectiveness and efficiency.

Evaluation is the estimation, appraisal, or assessment of a policy: its content, implementation, goal attainment, and other effects.

Evaluation seeks to identify the factors that contributed to the policy process (problem identification, formulation, adoption, and so on) in order to continue, modify, strengthen, or terminate the policy.
Regression Discontinuity Design (RDD)

RDD is a quasi-experimental pretest-posttest design that elicits the causal effects of interventions by assigning a cutoff/threshold above or below which an intervention is assigned. By comparing observations lying closely on either side of the threshold, it is possible to estimate the average treatment effect in environments in which randomization is infeasible.

E.g., a scholarship award: since high-performing students are more likely to be awarded the merit scholarship and to continue performing well at the same time, comparing the outcomes of awardees and non-recipients would lead to an upward bias in the estimates. Even if the scholarship did not improve grades at all, awardees would have performed better than non-recipients, simply because scholarships were given to students who were performing well ex ante. RDD avoids this bias by comparing only students just above and just below the cutoff, as sketched below.
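A minimal RDD sketch of the scholarship example on simulated data (the cutoff, bandwidth, effect size, and variable names are illustrative assumptions, not from the lecture):

```python
# RDD sketch on simulated scholarship data (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)

n = 2000
score = rng.uniform(0, 100, n)           # pretest score (running variable)
cutoff = 80.0                            # scholarship awarded at/above 80
treated = (score >= cutoff).astype(float)
true_effect = 5.0                        # assumed effect of the scholarship
# Later performance rises with pretest score even without treatment, which
# is the source of the naive upward bias described above.
outcome = 0.5 * score + true_effect * treated + rng.normal(0, 5, n)

# Naive comparison: biased because awardees were already stronger students.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# RDD: compare observations within a narrow bandwidth of the cutoff,
# fitting a line on each side and taking the gap at the threshold.
h = 10.0                                 # bandwidth (an assumption)
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + h)
b_left = np.polyfit(score[left], outcome[left], 1)
b_right = np.polyfit(score[right], outcome[right], 1)
rdd = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)

print(f"naive difference: {naive:.2f}")   # far above 5 due to selection
print(f"RDD estimate:     {rdd:.2f}")     # close to the true effect of 5
```

The naive estimate is inflated by selection, while the near-cutoff comparison recovers something close to the assumed effect.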
POLICY EVALUATION DESIGNS

1) True experiment: through various experimental designs, two groups, one experimental/treatment and one control, are randomly selected. The experimental group is treated with the policy program; the control group is not. Pretests and posttests of the two groups are used to determine whether changes, such as improved/lower incidence, occurred; if the performance of the experimental group is significantly better than that of the control group, the policy is judged effective. A minimal sketch follows.
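A minimal sketch of the randomized pretest-posttest comparison on simulated data (group sizes and the assumed program effect are illustrative assumptions):

```python
# Randomized pretest-posttest comparison on simulated data.
import random
import statistics

random.seed(1)

# Randomly assign 200 sites to treatment (policy program) or control.
sites = list(range(200))
random.shuffle(sites)
treatment, control = sites[:100], sites[100:]

def simulate_change(in_program: bool) -> float:
    """Posttest minus pretest change in the outcome for one site."""
    base_change = random.gauss(0.0, 2.0)               # background drift
    return base_change + (3.0 if in_program else 0.0)  # assumed program effect

t_changes = [simulate_change(True) for _ in treatment]
c_changes = [simulate_change(False) for _ in control]

# If the treatment group's mean change clearly exceeds the control group's,
# the design attributes the difference to the policy.
print(f"treatment mean change: {statistics.mean(t_changes):.2f}")
print(f"control mean change:   {statistics.mean(c_changes):.2f}")
```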

A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on a target population without random assignment.

2) Quasi-experiment: used when a true experiment is not possible because of time and cost. A random process (one where the outcome is probabilistic rather than deterministic in nature, i.e. where there is uncertainty as to the result, e.g. tossing a die: we don't know in advance what number will come up) is not used; rather, the treatment group is compared with another group that is similar. For example, for a highway speed control program, record the number of fatalities in a state that initiated a policy to crack down on speeding and compare it with a state that did not initiate such a program. Similarly, a before-and-after study compares outcomes before and after an intervention, as in a water quality control program.
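One common way to combine the two comparisons just described (a similar comparison state plus before-and-after measurement) is a difference-in-differences calculation; a minimal sketch with invented fatality counts:

```python
# Difference-in-differences sketch for the highway speed-control example;
# all fatality counts below are invented for illustration.

# Annual traffic fatalities before and after one state starts a speeding
# crackdown, alongside a similar state that does not.
crackdown_state = {"before": 520, "after": 430}
comparison_state = {"before": 510, "after": 495}

# Change within each state over the same period.
change_treated = crackdown_state["after"] - crackdown_state["before"]    # -90
change_control = comparison_state["after"] - comparison_state["before"]  # -15

# The comparison state's change stands in for what would have happened
# without the policy; the difference of the two changes is the estimate.
did_estimate = change_treated - change_control                           # -75

print(f"estimated fatalities averted by the crackdown: {-did_estimate}")
```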
PROCESS OF EVALUATION

1. Identify the goals and objectives of the program or policy.
2. Construct an analytical model (deconstructing the mechanisms underlying the processes involved and interpreting the numerical data) of what the policy or program is expected to accomplish.
3. Develop a research design capable of distinguishing the goals of the program, and the observed complex data, from the range of outcomes that might be observed if the variation in outcomes were simply random (without method) or unaffected by the policy.
4. Collect the data or actual measurements to describe the phenomena of interest.
5. Analyze and interpret the results: do the data imply that actual performance is at or above the goal?
STEPS IN POLICY EVALUATION

• STEP 1: Define Purpose and Scope
• Why are you doing the evaluation? To document program outcomes? To guide program improvement?
STEPS IN POLICY EVALUATION

Step 2: Specify Evaluation Design

• Status (here and now; snapshot)
• Longitudinal (what happens over extended time)
• Change (what happened as a result of a program; what differences are there between time A and time B)
• Comparison (group A vs. group B; program A vs. program B)
STEPS IN POLICY EVALUATION

• Step 3: Create a Data Collection Action Plan


• How will the data be collected?
• Instrumentation:
• surveys
• published instruments
• focus groups
• observations
STEPS IN POLICY EVALUATION
• Step 4: Collect Data
• How much data do you need? 100% of the target audience is ideal, but this may be too expensive and time-consuming.

A random sample is a subset of a statistical population in which each member of the subset has an equal probability of being chosen. ... An example of a simple random sample would be the names of 25 employees being chosen out of a hat from a company of 250 employees; a sketch of the same idea in code follows.
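A minimal sketch of the 25-from-250 example, with a hypothetical roster (random.sample draws without replacement, giving every employee an equal chance):

```python
# Simple random sample: 25 employees drawn from a roster of 250.
import random

employees = [f"employee_{i}" for i in range(1, 251)]  # hypothetical roster
survey_sample = random.sample(employees, k=25)        # without replacement

print(survey_sample[:5])  # first few selected names
```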
STEPS IN POLICY EVALUATION

• Step 5: Analyze Data
• Data collected during the policy evaluation are compiled and analyzed.



STEPS IN POLICY EVALUATION

Step 6: Drawing Conclusions and Documenting Findings
• Examine results carefully and objectively.
• Draw conclusions based on your data.
• What do the results signify about your program?
STEPS IN POLICY EVALUATION

STEP 7: Feedback to Program Improvement

You can use evaluation findings to make program improvements:
– Consider adjustments
– Re-examine/revise program strategies
– Change programs or methodologies
– Increase time with the program
QUALITY STANDARDS FOR EVALUATION
Evaluation Within the Policy Process
• Evaluating Policy Content: Does the content clearly articulate the goals of the policy, its implementation, and the underlying logic for why the policy will produce the intended change? Evaluating the development of a policy helps in understanding its context, content, and implementation.
• Evaluating Policy Implementation: Was the policy implemented as intended? The implementation of a policy is a critical component in understanding its effectiveness. Evaluation of policy implementation can provide important information about the barriers to and facilitators of implementation, and a comparison between different components or intensities of implementation.
• Evaluating Policy Impact: Did the policy produce the intended outcomes and impact? Within injury prevention, for example, the intended impact may be a reduction in injuries or in the severity of injuries. However, it is important to evaluate short-term and intermediate outcomes as well.
CRITERIA FOR POLICY EVALUATION

Effectiveness
  Question: Has a valued outcome been achieved?
  Illustrative criteria: units of service.

Efficiency
  Question: How much effort (resources) was required to achieve the valued outcome?
  Illustrative criteria: unit cost, net benefits, cost-benefit ratio (CBR).

Adequacy
  Question: To what extent does the achievement of the valued outcome resolve the problem?
  Illustrative criteria: cost-effectiveness for the problem; tabulation of resource quantities (and qualities) and calculation of the total cost required for addressing the problem.
CRITERIA FOR POLICY EVALUATION (CONTD.)

Equity
  Question: Are the costs and benefits distributed equitably among different groups?
  Illustrative criteria: Pareto criterion, Kaldor-Hicks criterion, Rawls's criterion (each explained below).

Responsiveness
  Question: Do policy outcomes satisfy the needs, preferences, or values of particular groups?
  Illustrative criteria: consistency with citizen surveys.

Appropriateness
  Question: Are the desired outcomes (objectives) actually worthy or valuable?
  Illustrative criteria: public programs should be equitable as well as efficient.
Pareto criterion: a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off. For example, suppose there are two consumers, A and B, and only one resource X, equal to 20 units. Assume the resource has to be distributed equally between A and B; it can then be distributed as (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10). At point (10,10) all resources have been exhausted. No further distribution is possible: if redistribution continues, it will lead to a position such as (11,9) or (9,11) that makes one consumer better off and the other worse off. Hence, point (10,10) is Pareto optimal; no further Pareto improvements can be made. A small check in code follows.
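A small check of the Pareto criterion on the two-consumer example above (the helper function and allocation pairs are illustrative):

```python
# Pareto-improvement check for two-consumer allocations (a, b).

def is_pareto_improvement(old: tuple, new: tuple) -> bool:
    """True if `new` makes someone better off and no one worse off."""
    return all(n >= o for n, o in zip(new, old)) and any(
        n > o for n, o in zip(new, old)
    )

print(is_pareto_improvement((5, 5), (6, 6)))     # True: both gain
print(is_pareto_improvement((10, 10), (11, 9)))  # False: B loses a unit
```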

Kaldor-Hicks criterion: a measure of economic efficiency that captures some of the intuitive appeal of Pareto efficiency but is more flexible and hence applicable to more circumstances. For example, in cost-benefit analysis a project, say a new airport, is evaluated by comparing the total costs, such as building costs and environmental costs, with the total benefits, such as airline profits and convenience for travelers. However, as cost-benefit analysis may also assign different social welfare weights to different individuals, e.g. more to the poor, the compensation criterion is not always invoked in cost-benefit analysis.

Rawls's criterion: the maximin rule is two-fold: seek to maximize your minimum profits, and seek to minimize your maximum losses. This assumes a zero-sum game (one pie, pieces cut according to each player's control/power). (From the theory of justice as fairness.) A sketch contrasting the two criteria follows.
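A minimal sketch contrasting the Kaldor-Hicks test with Rawls's maximin rule on invented payoffs for two groups (all numbers and option names are illustrative assumptions):

```python
# Each option maps to (payoff to group A, payoff to group B), e.g. net
# benefits of alternative airport plans in some common unit.
options = {
    "plan_1": (100, -20),  # big winner, small loser
    "plan_2": (30, 25),
    "status_quo": (0, 0),
}

# Kaldor-Hicks: accept the option whose total gains exceed total losses
# by the most, so winners could in principle compensate losers.
kaldor_hicks = max(options, key=lambda k: sum(options[k]))

# Rawls's maximin: choose the option with the best worst-off position.
rawls = max(options, key=lambda k: min(options[k]))

print(f"Kaldor-Hicks choice: {kaldor_hicks}")  # plan_1 (total = 80)
print(f"Rawlsian choice:     {rawls}")         # plan_2 (worst-off gets 25)
```

The contrast shows why the criteria can disagree: Kaldor-Hicks favors the largest total net benefit even when someone loses, while maximin favors the option whose worst-off group fares best.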
APPROACHES TO EVALUATION

Approach: Pseudo-evaluation (the political orientation promotes a positive or negative view of an object regardless of what its value might actually be).

Aims: descriptive methods to produce valid information about policy outcomes. Methods involved are quasi-experimental design, questionnaires, random sampling, and statistical techniques.

Major forms: social experimentation, social systems accounting, social auditing, research and practice synthesis.

Techniques: graphic display; tabular display; index numbers (values expressed as a percentage); interrupted time series analysis (data are collected at multiple instances over time; a sketch follows); control analysis (two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute, e.g. identifying factors that may contribute to a medical condition by comparing patients who have that condition/disease with patients who do not but are otherwise similar); regression discontinuity (a quasi-experimental pretest-posttest design that elicits the causal effects of interventions by assigning a cutoff or threshold above or below which an intervention is assigned, e.g. a remedial education program restricted to students performing at a low school level, financial help restricted to the poorest recipients (as in the Head Start program), or age restrictions for social assistance or unemployment benefits).
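A minimal interrupted time series sketch on simulated data (the series length, policy start date, and level shift are illustrative assumptions): fit the pre-policy trend, project it forward, and read the policy effect as the gap between projection and observation:

```python
# Interrupted time series sketch on a simulated monthly outcome.
import numpy as np

rng = np.random.default_rng(2)

months = np.arange(48)
policy_start = 24
# Outcome drifts upward, then drops by an assumed 8 units at the policy start.
outcome = 50 + 0.5 * months - 8.0 * (months >= policy_start) + rng.normal(0, 1.5, 48)

# Fit a line to the pre-policy months only.
pre = months < policy_start
slope, intercept = np.polyfit(months[pre], outcome[pre], 1)

# Counterfactual: the pre-policy trend extended over the post-policy months.
post = ~pre
counterfactual = slope * months[post] + intercept
effect = (outcome[post] - counterfactual).mean()

print(f"estimated level shift after the policy: {effect:.1f}")  # near -8
```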
APPROACHES TO EVALUATION (CONTD.)

Approach: Formal evaluation.

Aims: descriptive methods to produce valid and reliable information about policy outcomes, but such outcomes are evaluated on the basis of program objectives that have been formally announced by policy makers. Measures the worth or value of policies/programs. The method is the same as in pseudo-evaluation, but the difference is that formal evaluation uses legislation, program documents, and interviews with policymakers and administrators to define and specify goals.

Major forms: developmental evaluation, experimental evaluation, retrospective (backdated) process evaluation.

Techniques: objectives mapping (e.g. a written list of SMART objectives); value clarification (a technique, used for example in psychotherapy, that can often help an individual increase awareness of their values); value critique (two different modes of practicing judgment: critique and valuation); constraint mapping (guidelines for determining the types of development and project-related activities that can and cannot occur within particular areas); cross-impact analysis (how relationships between events would impact resulting events and reduce uncertainty about the future); discounting.
APPROACHES TO EVALUATION (CONTD.)

Approach: Decision-theoretic evaluation.

Aims: descriptive methods to produce valid and reliable information about policy outcomes that are explicitly valued by multiple stakeholders. The key difference from the approaches above is that decision-theoretic evaluation attempts to surface and make explicit latent as well as manifest goals and objectives; the goals and objectives of policymakers/administrators are but one source of value, because all parties who have a stake are involved in generating the goals/objectives against which performance is measured.

Major forms: multi-attribute utility analysis.

Techniques: brainstorming; argumentation analysis; policy Delphi (a structured communication technique, originally developed as a systematic, interactive forecasting method, which relies on a panel of experts. The experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts' forecasts from the previous round, as well as the reasons they provided for their judgments. Experts are thus encouraged to revise their earlier answers in light of the replies of other members of their panel. It is believed that during this process the range of the answers will decrease and the group will converge toward the correct answer). A toy simulation of this convergence follows.
CHALLENGES IN POLICY EVALUATION

• Uncertainty over policy goals (diffused/vague goals).
• Difficulty in determining causality: it is hard to isolate the real cause of an observed change. A flu may not clear up with medicine, perhaps because of the body's internal mechanisms or because the viral cycle was already complete; similarly, a thief may refrain from burglary not because of the threat of punishment but because of family bonds, etc.

Related readings on complexity in evaluation:
• Making evaluation real: Qualitative comparative analysis and environmental policy.
• Participatory systems mapping: exploring and negotiating complexity in evaluation with BEIS and Defra.
• Where do I start in navigating complex evaluation methods? Developing and using the Choosing Appropriate Evaluation Methods Tool.
• Finding Dynamic Patterns in Complex Social Systems.
• Improving theories of change to address complexity and putting value in evaluation.
POLICY TERMINATION

• Dissatisfaction with cost.

• Development of political opposition.

• Due to ideological inclination

• Usually, if opposition to a policy is too strong, it is altered instead of terminated.
EXAMPLES OF TERMINATED POLICIES
• Vietnam War.

• Cold War

• Pakistan’s Kashmir Policy

• Pak-Afghan Policy
