Chapter 2 - Lesson 4
CURRICULUM EVALUATION
Curriculum has been defined as the sum total of all the learning experiences, both planned and
unplanned, that are accorded to the learners under the guidance of an educational institution tasked
with developing the learners to become productive and effective contributing members of society.
Taking the preceding statement as a curriculum perspective, curriculum evaluation may be viewed in
the light of two basic issues:
1. Do the learning experiences, spelled out in terms of programs, projects, and activities as planned
and organized, actually produce the intended results?
2. How can these learning experiences best be improved in the context of curriculum intentions that
meet the desired development among learners?
Curriculum Evaluation
Refers to the formal process of determining the quality, effectiveness, or value of a curriculum
(Stufflebeam, 1991). It involves value judgments about the curriculum, both in terms of its processes
and its products. Evaluating the curriculum also involves delineating, obtaining, and providing
information for judging decisions and alternatives, including making value judgments about the set of
experiences selected for educational purposes.
Process Evaluation
Used if the intent of the curriculum evaluation falls on any of the following:
a. To provide empirical data and information that may help determine the extent to
which plans for curriculum implementation have been executed and resources have
been used wisely
b. To provide assistance necessary for changing or clarifying implementation plans
c. To assess the degree to which curriculum implementers have carried out their roles and
responsibilities in the implementation process
Product Evaluation
Used if the intent of curriculum evaluation is to gather, interpret, and appraise curricular
attainments, as often as necessary, in order to determine how well the curriculum meets the needs of
the students it is intended to serve.
It is important that these issues be addressed properly to establish and provide a logical
framework for dealing with evaluation activities, and also to set specific directions and parameters
in terms of the areas of the curriculum that would be subjected to evaluation.
Why Evaluate?
Curriculum needs to be evaluated in order to determine whether it meets the current demands of
educational reforms. The results of evaluation would enable education authorities to make the
necessary adjustments or improvements where gaps exist between the curriculum being implemented
and the identified educational requirements. The results of curriculum evaluation, in particular, may
provide direction, security, and feedback to all concerned, most especially the curriculum developers
themselves and other concerned educationists, including school heads and teachers.
Cronbach (1963) distinguishes three (3) types of decisions for which evaluation is utilized:
1. Course improvement
Pertains to decisions as to what instructional methods and materials meet the
objectives of the curriculum and where changes are needed;
2. Decisions about individuals
Pertains to identifying the needs of individual learners in order to plan their
instruction, judging their merit for purposes of selection and grouping, and
acquainting them with their own progress and deficiencies;
3. Administrative regulation
Focuses on judging how good the school system is and how good individual teachers are.
The goal of evaluation should include answers to questions about selection, adoption, support
and worth of educational materials and activities.
What to Evaluate?
Evaluation may be undertaken to gather data and relevant information that would assist
educators in deciding whether to accept, change or eliminate the curriculum in general or an
educational material in particular. Objects or subjects for evaluation may be the whole curriculum itself
or its specific aspects such as goals, objectives, content, methodology or even the results or outcomes.
The different phases or stages in curriculum development namely: planning, organization,
implementation, and evaluation may also be the focus of evaluation, data from which would serve as
significant input to improve the conduct of such processes the next time around.
1. Goals and Objectives
The evaluation of goals and objectives determines whether the curriculum's stated
intentions are clear, relevant, and attainable for the learners they are meant to serve.
2. Content and Methodology
The evaluation of content and methodology determines whether what is taught and the
way it is taught are aligned with and supportive of the curriculum's goals and objectives.
3. Outcomes/Results
The evaluation of outcomes or results goes hand in hand with the evaluation of
objectives, content and methodology. These outcomes or results serve as the ultimate
measure of how successful or effective the curriculum has been in achieving its goals and
objectives. Outcome evaluation is conducted to draw out information and data that can be
used in improving the curriculum as a whole.
Forms of Evaluation
1. Formative Evaluation
The process of looking for evidence of the success or failure of a curriculum program, a
syllabus, or a subject taught during implementation, with the intent of improving the program.
It is done while the program is ongoing, throughout its duration. Determining the success or
failure of the curriculum constitutes the major concern of curriculum evaluation. If the
curriculum is not succeeding as it is being implemented, then it is important to act accordingly
to prevent and/or avoid failure in the future.
As the term implies, formative evaluation involves the data gathered during the
formation or implementation of the curriculum, which may serve as important bases for revising
the curricular program being implemented. In education, the aim of formative evaluation is
usually to obtain information to improve a particular program.
2. Summative Evaluation
The form of evaluation used at the end of the implementation of a program. It is used
to assess whether or not the program, project, or even an activity really performed according
to how it was originally designed or developed. As the term implies, summative evaluation
involves gathering the needed data, usually collected at the end of the implementation of the
curriculum program.
On Models of Evaluation
A model represents an exemplar or an ideal pattern of something that may be worthy of
imitation or regarded as a guide for people to follow when using, adopting, or implementing
a particular program or activity.
One widely used model of curriculum evaluation is Stufflebeam's CIPP Model, whose
elements are Context, Input, Process, and Product evaluation. Braden (1992) posits that this
model can be used for both formative and summative kinds of evaluation activity. This is
evident in his statement: "Perhaps, the most significant characteristic of CIPP is that it
makes provision for holistic evaluation. Its elements are systems oriented, structured to
accommodate universal evaluation needs."
To respond more effectively to the needs of decision makers, this Stufflebeam model
provides a means for generating data relative to the four phases of program evaluation:
1. Context Evaluation
Intended to continuously assess needs and problems in context in order to
help decision makers determine goals and objectives. To serve planning decisions,
this element of the CIPP model "is intended to describe the status or context or
setting so as to identify the unmet needs, potential opportunities, problems, or
program objectives that will be evaluated."
2. Input Evaluation
Used in assessing alternative means for achieving those goals and objectives
in order to help decision makers choose optimal means. To serve structuring
decisions, this element is intended for evaluators to provide information that could
help decision makers in selecting procedures and resources for the purpose of
designing or choosing appropriate methods and materials.
3. Process Evaluation
The main task of this element of the CIPP model is to monitor the processes,
both to ensure that the means are actually implemented and to make the necessary
modifications. It serves implementation decisions by making sure that the program
is going as intended and by identifying defects or strengths in the procedures.
4. Product Evaluation
This is used to compare actual ends with intended or desired ends, eventually
leading to a series of modifying and/or recycling decisions. It serves recycling
decisions through a combination of progress and outcome evaluation stages that
serve in determining and judging program attainments.
Glatthorn (1987) points out that throughout the four stages of the model, the following
specific steps are undertaken:
1. Identify the kinds of decisions to be made
2. Identify the kinds of data needed to make those decisions
3. Collect the data needed
4. Establish criteria for determining quality
5. Analyze the data collected on the basis of the established criteria
6. Provide the needed information to decision makers explicitly
In totality, the CIPP model looks at evaluation both in terms of processes and products or
outcomes not only at the conclusion of the program but also at various phases or stages of program
implementation.
Another model is Stake's Countenance Model, which judges a program in terms of its
antecedents, transactions, and outcomes. According to Worthen and Sanders (1987), as mentioned
by Ogle (2002), the evaluator would use this model by following these steps:
1. Provide background, justification, and description of the program rationale
2. List intended antecedents, transactions, and outcomes
3. Record observed antecedents, transactions, and outcomes
4. Explicitly state standards (criteria, expectations, performance of comparable programs)
for judging program antecedents, transactions and outcomes
5. Record the judgments made about the antecedent conditions, transactions, and outcomes
Stake himself, as cited in Glatthorn (1987, pp. 275-276), recommends the following steps in
employing his Responsive Model, which he considers an interactive and recursive evaluation process:
1. The evaluator meets with clients, staff, and audiences to gain a sense of their perspectives
on and intentions regarding the evaluation
2. The evaluator draws on such discussions and the analysis of any documents to determine
the scope of the evaluation project
3. The evaluator observes the program closely to get a sense of its operation and to note any
unintended deviations from announced intents
4. The evaluator discovers the stated and real purposes of the project and the concerns that
various audiences have about it and the evaluation
5. The evaluator identifies the issues and problems with which the evaluation should be
concerned. For each issue and problem, the evaluator develops an evaluation design,
specifying the kinds of data needed
6. The evaluator selects the means needed to acquire the data desired. Most often, the
means will be human observers or judges
7. The evaluator implements the data collection procedures
8. The evaluator organizes the information into themes and prepares “portrayals” that
communicate in natural ways the thematic reports. The portrayals may involve videotapes,
artifacts, case studies, or other “faithful representations.”
9. By again being sensitive to the concerns of the stakeholders, the evaluator decides which
audiences require which reports and chooses formats most appropriate for given
audiences
Evidently, the main advantage of this responsive model is its sensitivity to clients and
stakeholders, in particular to their concerns and their values.
3. Observations
This technique involves viewing the events, episodes, and phenomena in the program as
they are actually occurring. It is used to gather exact, precise, and accurate data and
information on how a program actually operates, particularly information pertaining to
processes.