Chapter 2: Lesson 4

CURRICULUM EVALUATION
Curriculum has been defined as the sum total of all the learning experiences, both planned and
unplanned, accorded to learners under the guidance of an educational institution that is tasked
with developing them into productive and effective contributing members of society. Taking this
statement as a curriculum perspective, curriculum evaluation may be viewed in the light of two
basic issues:
1. Do the learning experiences, spelled out in terms of programs, projects and activities as
planned and organized, actually produce the intended results?
2. How can these learning experiences be best improved in the context of curriculum intentions
that meet the desired development among learners?

Curriculum Evaluation
Refers to the formal process of determining the quality, effectiveness, or value of a curriculum
(Stufflebeam, 1991). It involves value judgments about the curriculum, both in terms of its process
and its products. Evaluating the curriculum also involves delineating, obtaining and providing
information for judging decisions and alternatives, including value judgments about a set of
experiences selected for educational purposes.

Process Evaluation
Used if the intent of the curriculum evaluation falls on any of the following:
a. To provide empirical data and information that may help determine the extent to
which plans for curriculum implementation have been executed and resources were used
wisely
b. To provide assistance necessary for changing or clarifying implementation plans
c. To assess the degree to which curriculum implementers have carried out their roles and
responsibilities in the implementation process

Product Evaluation
Used if the intent of curriculum evaluation is to gather, interpret and appraise curricular
attainments, as often as necessary, in order to determine how well the curriculum meets the needs
of the students it is intended to serve.

Purposes of Curriculum Evaluation


Patton (1990) stresses the importance of curriculum evaluation as a mechanism for
monitoring and getting feedback about a particular curricular program: whether or not it is
running effectively, and, from there, what kind of intervention is needed before evaluating the
desired outcomes of the program that was or is being implemented. Evaluators need to know what
produced the observed results so that they can decide on the kind of intervention that ought to be
done to improve the program under consideration. Curriculum evaluation thus becomes a source of
information about what has been going on with the program, how the program progresses, and how
and why the program might or might not have deviated from the objectives as formulated and
planned.

Three questions may be considered when evaluating a curriculum:


1. Why evaluate the curriculum?
2. What aspects of the curriculum are going to be evaluated?
3. How is evaluation going to be done?

It is important that these three issues be addressed properly to establish and provide a logical
framework in dealing with evaluation activities, and also to specifically set directions and parameters
in terms of the areas of curriculum that would be subjected to evaluation.

Why Evaluate?
The curriculum needs to be evaluated in order to determine whether it meets the current
demands of educational reforms. Results of evaluation enable education authorities to make the
necessary adjustments or improvements where gaps exist between the curriculum being
implemented and the identified educational requirements. The results of curriculum evaluation, in
particular, may provide direction, security and feedback to all concerned, especially the curriculum
developers themselves and concerned educationists, including school heads and teachers.

Cronbach (1963) distinguishes three (3) types of decisions for which evaluation is utilized:
1. Course improvement
Pertains to decisions as to what instructional methods and materials meet the
objectives of the curriculum and where changes are needed;

2. Decision about individuals


Concerns identifying the needs of the learners vis-à-vis planning for instruction and
grouping, and making the learners aware and conscious of their own deficiencies; and

3. Administrative regulations
Focuses on judging how good the school system is and how good individual teachers are.
The goal of evaluation should include answers to questions about selection, adoption, support
and worth of educational materials and activities.

What to Evaluate?
Evaluation may be undertaken to gather data and relevant information that would assist
educators in deciding whether to accept, change or eliminate the curriculum in general or an
educational material in particular. Objects or subjects for evaluation may be the whole curriculum itself
or its specific aspects such as goals, objectives, content, methodology or even the results or outcomes.
The different phases or stages in curriculum development namely: planning, organization,
implementation, and evaluation may also be the focus of evaluation, data from which would serve as
significant input to improve the conduct of such processes the next time around.

1. Goals and Objectives


The very foundation on which a curricular program or any educational program is
developed is clearly spelled out in its goals and objectives. All the processes and mechanisms
needed in designing a curricular or educational program are based on these goals and objectives;
hence, they have to be evaluated, primarily to determine whether they are worthwhile bases for
developing the program and whether they are achievable and result in the desired outcomes.

2. Content and methodology


The content of the developed curriculum or any educational program needs to be examined
and evaluated in order to determine whether it relates to the needs of the learners for whom the
curriculum was developed, to establish the congruency between the methodology and the
curriculum objectives, and to determine the appropriateness of the content.

3. Outcomes/Results
The evaluation of outcomes or results goes hand in hand with the evaluation of
objectives, content and methodology. These outcomes or results serve as the ultimate
measure of how successful or effective the curriculum has been in achieving its goals and
objectives. Outcome evaluation is conducted to draw out information and data that can be
used in improving the curriculum as a whole.

Forms of Evaluation
1. Formative Evaluation
The process of looking for evidence of success or failure of a curriculum program, a
syllabus or a subject taught during implementation, intended to improve the program. It is done
while the program is ongoing, throughout its duration. Determining the success or failure of the
curriculum constitutes the major concern of curriculum evaluation: if the curriculum is not
succeeding as it is being implemented, then it is important to act accordingly to avoid failure in
the future.

As the term implies, formative evaluation involves the data gathered during the
formation or implementation of the curriculum, which may serve as important bases in revising
the curricular program being implemented. In education, the aim of formative evaluation is
usually to obtain information to improve a particular program.

2. Summative Evaluation
The form of evaluation used at the end of the implementation of a program. It is used
to assess whether or not the program, project or even an activity really performed according
to how it was originally designed or developed. As the term implies, summative evaluation
involves gathering the needed data, usually collected at the end of the implementation of the
curriculum program.

Curriculum Evaluation Models


Determining the extent to which a particular program, activity, or any undertaking for that
matter has succeeded requires scientific procedures in order to arrive at results that are equally
scientific, which is highly important in dealing with the veracity, validity and reliability of results.

On Models of Evaluation
A model represents an exemplar or an ideal pattern of something that may be worthy of
imitation or regarded as a guide for people to follow when using, adopting or implementing a
particular program or activity.

A. Tyler’s Objectives-Centered Model


Tyler’s Objectives-Centered Model (1950) can best be described as a rational and
systematic movement through evaluation procedures, following the several related steps
indicated below:
1. Begin with the behavioral objectives that have been previously determined. Those
objectives should specify both the content of learning and the student behavior
expected: “Demonstrate familiarity with dependable sources of information on
the questions relating to nutrition”.
2. Identify the situations that will give the student the opportunity to express the
behavior embodied in the objective and that evoke or encourage such behavior.
Thus, if you wish to assess oral language use, identify situations that evoke oral
language.
3. Select, modify, or construct suitable evaluation instruments, and check the
instruments for objectivity, reliability, and validity.
4. Use the instruments to obtain summarized or appraised results.
5. Compare the results obtained from several instruments before and after given
periods in order to estimate the amount of change taking place.
6. Analyze the results in order to determine strengths and weaknesses of the
curriculum and to identify possible explanations for this particular pattern of
strengths and weaknesses.
7. Use the results to make the necessary modification in the curriculum.
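
Steps 4 and 5 above can be sketched in code. The fragment below is a minimal illustration, not part of Tyler's model itself: the objective names and instrument scores are invented, and it simply compares summarized results before and after a given period to estimate the amount of change.

```python
# Minimal sketch of Tyler's steps 4-5: summarize instrument results and
# compare them before and after a period of instruction to estimate change.
# The objective names and scores below are hypothetical illustrations.

def mean(scores):
    """Step 4: summarize raw instrument results as an average score."""
    return sum(scores) / len(scores)

def estimate_change(pre_scores, post_scores):
    """Step 5: mean gain per behavioral objective across two administrations."""
    return {
        objective: round(mean(post_scores[objective]) - mean(pre_scores[objective]), 2)
        for objective in pre_scores
    }

# Hypothetical scores for two behavioral objectives, before and after instruction.
pre = {"nutrition_sources": [2, 3, 2, 4], "oral_language_use": [3, 3, 2, 2]}
post = {"nutrition_sources": [4, 4, 3, 5], "oral_language_use": [4, 3, 3, 4]}

print(estimate_change(pre, post))
```

The per-objective gains would then feed step 6, where the evaluator looks for patterns of strengths and weaknesses across objectives.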

Basically described as rational and systematic, Tyler’s Objectives-Centered Model for
evaluating a curriculum has been found to be advantageous, as it is relatively easy to understand
and apply.

B. Stufflebeam’s Context, Input, Process and Product Model


Developed by a Phi Delta Kappa committee chaired by Daniel Stufflebeam (1971), this
model, which accordingly seemed to appeal to educational leaders, emphasizes the importance of
producing evaluative data that can be used for decision making, since the view of the Phi Delta
Kappa committee that worked on the model was that decision making is the sole justification and
rationale for conducting an evaluation.

Braden (1992) posits that this model can be used for both formative and summative
kinds of evaluation activity, as is evident in his statement: “Perhaps, the most significant
characteristics of CIPP is that it makes provision for holistic evaluation. Its elements are
systems oriented, structured to accommodate universal evaluation needs.”

To respond more effectively to the needs of decision makers, this Stufflebeam model
provides a means for generating data relative to the four phases of program evaluation:
1. Context Evaluation
Intended to continuously assess needs and problems in context in order to
help decision makers determine goals and objectives. To serve planning decisions,
this element of the CIPP model “is intended to describe the status or context or setting
so as to identify the unmet needs, potential opportunities, problems, or program
objectives that will be evaluated.”

2. Input Evaluation
Used in assessing alternative means for achieving those goals and objectives
in order to help decision makers choose optimal means. To serve structuring
decisions, this element is intended for evaluators to provide information that could
help decision makers in selecting procedures and resources for the purpose of
designing or choosing appropriate methods and materials.

3. Process Evaluation
The main task of this element of the CIPP model is to monitor the processes,
both to ensure that the means are actually implemented and to make necessary
modifications. It serves implementing decisions, as it makes sure that the program
is going as intended and identifies defects or strengths in the procedures.

4. Product Evaluation
This is used to compare actual ends with intended or desired ends, eventually
leading to a series of modifying and/or recycling decisions. It serves recycling
decisions, where a combination of progress and outcome evaluation stages serves in
determining and judging program attainments.
Glatthorn (1987) points out that all throughout the four stages of the model, the following
specific steps are undertaken:
1. Identify the kinds of decisions
2. Identify the kinds of data needed to make those decisions
3. Collect the data needed
4. Establish criteria for determining quality
5. Analyze data collected on the bases of established criteria
6. Provide needed information to decision makers explicitly
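
The six steps above can be read as a small data pipeline. The sketch below is only an invented illustration of that flow; the decision, the mastery-test pass rates, and the quality criterion are all assumptions, not part of the CIPP model itself.

```python
# Hypothetical sketch of Glatthorn's six CIPP steps as a minimal pipeline.
# The decision, the data, and the quality criterion are invented examples.

def evaluate_for_decision(decision, data, criterion):
    """Steps 5-6: analyze collected data against the established criterion
    and package the result as information for decision makers."""
    meets = {name: criterion(values) for name, values in data.items()}
    return {"decision": decision, "meets_criteria": meets}

# Steps 1-3: a decision to support, and the data collected to inform it
# (invented mastery-test pass rates for two pilot course units).
decision = "retain or revise the two pilot units"
data = {
    "unit_1_pass_rate": [0.82, 0.79, 0.85],
    "unit_2_pass_rate": [0.55, 0.60, 0.58],
}

def criterion(values):
    """Step 4: an assumed quality criterion - mean pass rate of at least 0.75."""
    return sum(values) / len(values) >= 0.75

print(evaluate_for_decision(decision, data, criterion))
```

Here only one unit would meet the assumed criterion, which is exactly the kind of explicit information step 6 asks the evaluator to hand to decision makers.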

In totality, the CIPP model looks at evaluation both in terms of processes and products or
outcomes not only at the conclusion of the program but also at various phases or stages of program
implementation.

C. Stake’s Responsive Model


Developed by Robert Stake (1973), this evaluation model places more emphasis on a full
description of the evaluated program as well as the evaluation process itself. Stake believes
that the concerns of the stakeholders for whom the evaluation is done should be primordial
in determining all sorts of issues surrounding the evaluation process itself. This model is an
approach that trades off some measurement precision in order to make the findings more
useful to persons involved with the program.
Three (3) elements are identified in this model:
1. Antecedents – the conditions existing prior to intervention;
2. Transactions – the events or experiences that constitute the program; and
3. Outcomes – the effects of the program.

Two (2) special aspects:


1. The distinction between standards and observations;
2. The difference between standards and judgments about what effects occurred

According to Worthen and Sanders (1987), as mentioned by Ogle (2002), the evaluator would
use this model by following these steps:
1. Provide background, justification, and description of the program rationale
2. List intended antecedents, transactions, and outcomes
3. Record observed antecedents, transactions, and outcomes
4. Explicitly state standards (criteria, expectations, performance of comparable programs)
for judging program antecedents, transactions and outcomes
5. Record judgments made about the antecedent conditions, transactions, and outcomes

Stake himself, as cited in Glatthorn (1987, pp. 275-276), recommends the following steps in
employing his model, which he considers an interactive and recursive evaluation process:
1. The evaluator meets with clients, staff, and audiences to gain a sense of their perspectives
on and intentions regarding the evaluation
2. The evaluator draws on such discussions and the analysis of any documents to determine
the scope of the evaluation project
3. The evaluator observes the program closely to get a sense of its operation and to note any
unintended deviations from announced intents
4. The evaluator discovers the stated and real purposes of the project and the concerns that
various audiences have about it and the evaluation
5. The evaluator identifies the issues and problems with which the evaluation should be
concerned. For each issue and problem, the evaluator develops an evaluation design,
specifying the kinds of data needed
6. The evaluator selects the means needed to acquire the data desired. Most often, the
means will be human observers or judges
7. The evaluator implements the data collection procedures
8. The evaluator organizes the information into themes and prepares “portrayals” that
communicate in natural ways the thematic reports. The portrayals may involve videotapes,
artifacts, case studies, or other “faithful representations.”
9. By again being sensitive to the concerns of the stakeholders, the evaluator decides which
audiences require which reports and chooses formats most appropriate for given
audiences

Evidently, the main advantage of this responsive model is that it is sensitive to clients or
stakeholders, in particular to their concerns and their values.

D. Eisner’s Connoisseurship Model


Developed by Elliot Eisner (1979) through his background in aesthetics and education,
this model is an approach to evaluation that gives emphasis to qualitative appreciation.

Understanding Eisner’s model requires understanding the two related concepts on
which the model is built:
1. Connoisseurship – the “art of appreciation” – recognizing, through perceptual
memory and drawing from experience, what is significant. It is the ability to
perceive the particulars of educational life and to understand how these
particulars form part of a classroom structure.
2. Criticism – “the art of disclosing qualities of an entity that connoisseurship
perceives.” Evaluation, in Eisner’s perspective, may be regarded as educational
criticism consisting of three aspects:
a. the descriptive aspect, which involves an act of characterizing and portraying
the “relevant qualities of educational life”;
b. the interpretive aspect, which “uses ideas from the social sciences to explore
meanings and develop alternative explanations” of social events and
situations;
c. the evaluative aspect, where judgments are made in an effort to improve
the educational processes, thus providing solid bases for making value choices
in order for others to present a better argument.

An Eclectic Approach to Curriculum Evaluation


The models presented earlier clearly display the distinct features that separate one model
from another. Significantly, each of these models offers a means to determine the effectiveness of
a particular curricular program by looking at its essential components and its impact on the
development of the intended clients, the learners in the case of an educational institution.

Criteria for Curriculum Evaluation Model


• Can be implemented without making inordinate demands upon district resources
• Can be applied to all levels of curriculum – programs of study, fields of study, courses of study
• Makes provisions for assessing all significant aspects of curriculum – the written, the taught,
the supported, the tested, and the learned curricula
• Makes useful distinctions between merit (intrinsic value) and worth (value for a given context)
• Is responsive to the special concerns of district stakeholders and is able to provide them with
the data they need for decision making
• Is goal oriented, emphasizing objectives and outcomes
• Is sensitive to and makes appropriate provisions for assessing unintended effects
• Pays due attention to and makes provisions for assessing the special context for the curriculum
• Is sensitive to and makes provisions for assessing the aesthetic or qualitative aspects of the
curriculum
• Makes provision for assessing opportunity cost – the opportunities lost by those studying this
curriculum
• Uses both quantitative and qualitative methods for gathering and analyzing data
• Presents findings in reports responsive to the special needs of several audiences

Tools, Methods and Techniques for Evaluation


1. Questionnaires and Checklists
These techniques are oftentimes used when you want to easily or quickly get as much
data and information from your respondents or participants in the evaluation.
A questionnaire consists of a series of questions, usually followed by three to five
indicators with corresponding numerical equivalents to facilitate analysis and interpretation of
responses.
A checklist, on the other hand, usually contains a list of variables or characteristics about
a particular situation or event, where respondents are usually asked to simply put a check mark
opposite the variable that corresponds to the kind of data being sought.
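
As a minimal sketch of how numerical equivalents facilitate analysis, the fragment below scores a hypothetical five-point questionnaire and maps each item’s mean back to a verbal indicator. The item names, responses, and scale labels are all invented for illustration.

```python
# Hypothetical sketch: scoring questionnaire items whose indicators carry
# numerical equivalents (5 = Strongly Agree ... 1 = Strongly Disagree).
# Every item name, response, and label below is an invented example.

SCALE = {5: "Strongly Agree", 4: "Agree", 3: "Neutral",
         2: "Disagree", 1: "Strongly Disagree"}

def interpret(mean_score):
    """Map a mean item score back to the nearest verbal indicator."""
    return SCALE[round(mean_score)]

def summarize(responses):
    """Return the mean score and verbal interpretation for each item."""
    summary = {}
    for item, scores in responses.items():
        mean_score = sum(scores) / len(scores)
        summary[item] = (round(mean_score, 2), interpret(mean_score))
    return summary

responses = {
    "Objectives are clearly stated": [5, 4, 4, 4],
    "Content matches learner needs": [3, 4, 3, 3],
}
print(summarize(responses))
```

Such per-item means and verbal interpretations are the kind of quickly tabulated evidence that makes questionnaires attractive for evaluation.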
2. Interviews
This is believed to be a very good way of assessing people’s perceptions, meanings,
definitions of situations and constructions of reality (Punch, 2005). Interviews are usually done
in one-on-one or face-to-face situations in which an individual asks questions to which a second
individual responds.

3. Observations
This technique involves actually viewing the events, episodes, and phenomena in the
program as they occur. It is used to gather exact, precise and accurate data and information on
how a program actually operates, particularly data pertaining to processes.

4. Documentary review and analysis


To get impressions of how a particular program or curriculum operates without
necessarily interrupting the program, an evaluator can employ the technique of documentary
review and analysis. The actual conduct of review and analysis of existing and available
documents, such as memoranda, circulars, orders, minutes, procedures and the like, may provide
evaluators with comprehensive and even historical information about the implementation
of the program.
