
Performance-Based Assessment Notes


Performance-based Assessment

Performance-based assessment uses tasks that require students to demonstrate their
knowledge, skills, and strategies by creating a response or a product (Rudner & Boston,
1994; Wiggins, 1989). Unlike a traditional standardized test in which students select one
of the responses provided, a performance assessment requires students to perform a task
or generate their own responses. For example, a performance task in writing would
require students to actually produce a piece of writing rather than answering a series of
multiple-choice questions about grammar or the structure of a paragraph. Performance
assessment is authentic when it mimics the kind of work that is done in real-world
contexts. For example, an authentic performance task in environmental science might
require a student to conduct research on the impacts of fertilizer on local groundwater and
then report the results to the public through a public service announcement or
informational brochure.

Performance-based assessment taps into students’ higher-order thinking skills, such as
evaluating the reliability of sources of information, synthesizing information to draw
conclusions, or using deductive/inductive reasoning to solve a problem. Performance
tasks may require students to make an argument with supporting evidence in English or
history or social science, conduct a controlled experiment in science, or solve a complex
problem or build a model in mathematics. Performance tasks often have more than one
acceptable solution or answer and also require students to explain their reasoning. The
format of performance assessment may range from “on-demand” kinds of tasks that can
be completed in a given amount of time (a timed writing exercise, for example) to long-
term projects that involve independent work or research outside of class.

Performance-based assessment is used for both formative and summative purposes.
When students are provided with multiple opportunities to learn and apply the skills
being measured and opportunities to revise their work, performance assessment can be
used to build students’ skills and also to inform teachers’ instructional decisions. When
combined with other kinds of assessments of student learning as part of a multiple-
measures assessment system, performance assessment can be used for summative
judgments about students’ understandings and skills in particular domains.

Advantages and Disadvantages of Performance-based Assessment

Advantages:
1. Allows teachers to pinpoint their students’ strengths and weaknesses, which gives them
insight into what they covered well as well as what material may need to be retaught,
possibly in a different way, to help students understand it better
2. Can be used to assess from multiple perspectives
3. Uses a student-centered design that can promote student motivation
4. Can be used to assess transfer of skills and integration of content
5. Engages students in active learning
6. Encourages time on academics outside of class
7. Can provide a dimension of depth not available in the classroom
8. Can promote student creativity
9. Can be scored holistically or analytically
10. May allow probes by the teacher to gain a clearer picture of student understanding or
thought processes
11. Can close the feedback loop between students and teacher
12. Can place teacher more in a mentor role than as a judge
13. Can be formative or summative
14. Can provide an avenue for student self-assessment and reflection
15. Can be embedded within the courses
16. Can adapt current assignments
17. Usually the most valid way of assessing skill development

Disadvantages:
1. Usually the most costly approach
2. Time consuming and labor intensive to design and execute for faculty and students
3. Must be carefully designed if used to document attainment of student learning
outcomes
4. Rating can be more subjective
5. Requires careful training of raters

Types of Performance-based Assessment

According to McTighe & Ferrara (1998), there are three types of performance-based
assessment: products, performances, and process-oriented assessments.

A product refers to something produced by students that provides concrete examples of
the application of knowledge. Examples include brochures, reports, web pages, and
audio or video clips. These are generally done outside of the classroom and based on
specific assignments (McTighe & Ferrara, 1998).

Performances allow students to show how they can apply knowledge and skills under
the direct observation of the teacher. These are generally done in the classroom since
they involve teacher observation at the time of performance. Much of the work may
be prepared outside the classroom but the students “perform” in a situation where the
teacher or others may observe the fruits of their preparation. Performances may also
be based on in-class preparation. They include oral reports, skits and role-plays,
demonstrations, and debates (McTighe & Ferrara, 1998).

Process-oriented assessments provide insight into student thinking, reasoning, and
motivation. They can provide diagnostic information when students are asked to reflect
on their learning and set goals to improve it. Examples are think-alouds,
self/peer assessment checklists or surveys, learning logs, and individual or pair
conferences (McTighe & Ferrara, 1998).

Developing a Performance-based Assessment

According to the Stanford SRN, the following steps can be used to build a performance-
based assessment system.

1. For each content area/discipline, the first step is to define the performance outcomes,
or learning targets, that the performance tasks will assess. The performance outcomes
serve as the foundation for the development of the scoring rubrics and performance
tasks. To ensure content validity, performance outcomes are aligned with state or
national standards, college readiness standards, and the core skills of the discipline.
Teachers and other stakeholders - in addition to content area specialists, assessment
specialists, and higher education faculty - are included in the development process to
ensure that the performance outcomes reflect the values and priorities of the users of
the assessment system.
2. Based on the performance outcomes, task parameters (or “task shell”) are defined
to ensure that the designed performance tasks will measure the desired outcomes.
Key decisions are made about task design by answering the following questions:
 What is the genre of performance that we want to measure (e.g., a scientific
inquiry, a literary analysis, or a mathematics application)?
 How will students communicate their learning (e.g., through a research paper,
a lab report, or a multimedia product)?
 Will the task require independent research, or can it be completed with
resources provided in class?
 How much choice will teachers and students have in determining the content
of the performance task?
3. Next is the development of the common scoring rubrics that will be used to assess
the student work. These rubrics are aligned with the performance outcomes and
are organized to represent key dimensions of performance. Written to reflect
students’ developmental trajectories, the rubrics make clear distinctions between
levels to facilitate reliable scoring. The common scoring rubrics are not task
specific and can be applied to evaluate any tasks that are designed to meet the
performance outcomes within disciplines (a hypothetical sketch of such a rubric
appears after this list).
4. Content-specific performance tasks are designed using a backward-planning tool
to ensure alignment with the performance outcomes, specific content standards, or
other learning targets. The designed tasks are vetted by content-area experts,
assessment specialists, and other stakeholders (e.g., teachers or higher education
faculty), and approved tasks are entered into the task bank. The performance tasks
are piloted, and student work samples are produced.
5. After collecting student work samples from across school sites, benchmark work
samples (those representing different levels of performance on the rubric) are
selected for training purposes. Teachers are trained to score student work with the
common scoring rubrics. A common training module and scoring procedures
maximize score reliability and comparability across schools. Trained scorers then
independently score several prescored tasks to check their ability to score reliably.
Those who pass the standard for scoring accurately are considered reliable scorers
(calibrated).
6. Teachers and other participants who have been trained to score and are calibrated
in a particular content area participate in scoring the student performance tasks in
that content area. These scores are collected and analyzed. The results inform
program review and instructional practice as well as provide the basis for further
revisions of the performance outcomes, rubrics, and tasks.
7. To check on score reliability and the comparability of scores across teachers and
schools, two strategies may be followed: An independent external audit of local
school scores may be conducted, or some percentage of student work may be
double-scored at the school site. A combination of these two methods may be
used to check score reliability within and across schools (the statistics behind
these checks are sketched after this list).
8. Additional research is conducted on students’ performance task scores and work
samples to evaluate the following:
 Content validity (whether these work samples truly measure state content
standards or represent college readiness skills)
 Concurrent validity (how consistent students’ performance tasks scores are
with students’ grades, SAT scores, or other standardized test scores)
 Predictive validity (how well students’ performance task scores predict
performance in college)
 Consequential validity (what students learn from completing a performance
task, or what teachers learn from implementing these tasks)
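
To make step 3 concrete, the sketch below models a common, task-independent rubric as
named dimensions with ordered level descriptors. The dimensions and wording are
invented for illustration; they are not the Stanford SRN’s actual rubrics.

```python
# Hypothetical common scoring rubric: each dimension maps to ordered level
# descriptors (index 0 = level 1, the lowest). Dimensions and wording are
# invented for illustration; they are not the Stanford SRN rubrics.
common_rubric = {
    "Use of evidence": [
        "Little or no relevant evidence",
        "Some relevant evidence, loosely tied to claims",
        "Relevant evidence supporting most claims",
        "Well-chosen evidence explicitly tied to every claim",
    ],
    "Reasoning": [
        "Conclusions stated without support",
        "Partial chain of reasoning with large gaps",
        "Mostly complete reasoning with minor gaps",
        "Complete, explicit reasoning from evidence to conclusion",
    ],
}

def describe(scores: dict[str, int]) -> str:
    """Turn {dimension: level} scores into a readable report line.

    Because the rubric is organized by dimension rather than by task, the
    same function can report on any task aligned to the outcomes.
    """
    return "; ".join(
        f"{dim} level {lvl}: {common_rubric[dim][lvl - 1]}"
        for dim, lvl in scores.items()
    )

print(describe({"Use of evidence": 3, "Reasoning": 2}))
```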
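
Steps 5 through 8 rest on a few standard statistics. The sketch below, using invented
scores on a 1-4 rubric scale, shows one way to compute exact agreement and Cohen’s
kappa for rater calibration (steps 5 and 7) and a Pearson correlation as a rough
concurrent validity check (step 8); it is not the Stanford SRN’s actual analysis.

```python
from collections import Counter

def percent_agreement(a: list[int], b: list[int]) -> float:
    """Fraction of work samples given the same rubric level by both raters."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

def pearson_r(x: list[float], y: list[float]) -> float:
    """Linear association between two score lists (concurrent validity)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

# Invented data: two trained raters double-scoring ten tasks on a 1-4 rubric.
rater1 = [3, 4, 2, 3, 1, 4, 3, 2, 3, 4]
rater2 = [3, 4, 2, 2, 1, 4, 3, 2, 3, 3]
print(f"exact agreement: {percent_agreement(rater1, rater2):.2f}")  # 0.80
print(f"Cohen's kappa:   {cohens_kappa(rater1, rater2):.2f}")       # ~0.72

# Invented data: the same students' task scores vs. a standardized test score.
task = [2, 3, 3, 4, 1, 2, 4, 3]
test = [55, 70, 68, 90, 40, 60, 85, 72]
print(f"concurrent validity r: {pearson_r(task, test):.2f}")
```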

Designing a performance-based assessment may be complex using the system suggested
by the Stanford SRN, but Wiggins and McTighe’s GRASPS model is an excellent starting
point. GRASPS is an acronym for:
Goal: State the goal, problem, challenge, or obstacle to be resolved in the task. (This
should be consistent with the intended learning outcome/objective.)
Role: Define the role of the students in the task and what they are being asked to do.
Audience: Identify the target audience within the context of the scenario. (Remember,
the audience is not limited to the instructor.)
Situation: Provide and explain the context of the situation and any additional factors
that could impede the resolution of the problem.
Product, Performance, and Purpose: Explain the product or performance that needs to
be created and its larger purpose.
Standards and Criteria for Success: Dictate the standards that must be met and how the
work will be judged by the target audience.
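
For readers who like to operationalize a template, the sketch below models a GRASPS
task as a simple record so that no element can be left out of a task design. The class
and field names are our own, not part of Wiggins and McTighe’s model.

```python
# Illustrative only: a GRASPS task as a record whose six required fields
# mirror the six elements above. Names are our own, not Wiggins & McTighe's.
from dataclasses import dataclass

@dataclass
class GraspsTask:
    goal: str       # problem, challenge, or obstacle to be resolved
    role: str       # what students are asked to be and to do
    audience: str   # who the work is for (not limited to the instructor)
    situation: str  # context and factors that could impede a resolution
    product: str    # the product/performance and its larger purpose
    standards: str  # how the target audience will judge the work

    def as_prompt(self) -> str:
        """Render the six elements as a task prompt handout."""
        parts = [
            ("Goal", self.goal), ("Role", self.role),
            ("Audience", self.audience), ("Situation", self.situation),
            ("Product, Performance, and Purpose", self.product),
            ("Standards and Criteria for Success", self.standards),
        ]
        return "\n".join(f"{label}: {text}" for label, text in parts)
```

The lesson-plan example in the next table would populate all six fields.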

Congratulations! You have now learned the essential concepts of performance-based
assessment.

To see how the GRASPS model is used in designing a performance-based assessment,
consider the example below; each GRASPS element is as defined in the table above.

Intended Learning Outcome: The students are expected to design a lesson plan
utilizing Gagne’s Nine Events of Instruction.

Goal: The goal is to design a lesson plan utilizing Gagne’s Nine Events of Instruction.

Role: The students are the lesson planners/designers.

Audience: The target audience is the teacher and the students in the class.

Situation: The students assume the role of the teacher as lesson planner/designer. They
need to use Gagne’s Nine Events of Instruction to effectively deliver a lesson in their
respective classes.

Product, Performance, and Purpose: Students need to write a lesson plan following the
stages of instruction prescribed in Gagne’s Nine Events of Instruction. These stages are:
(1) gaining the attention of the students, (2) informing the learner of the objective, (3)
stimulating recall of prior learning, (4) presenting the content, (5) providing learning
guidance, (6) eliciting the performance, (7) providing feedback, (8) assessing the
performance, and (9) enhancing retention and transfer. The students’ knowledge and
skills in lesson planning and in the use of Gagne’s Nine Events of Instruction will be
demonstrated through the lesson plan that they have written.

Standards and Criteria for Success: A rubric for lesson planning will be utilized to score
and evaluate the lesson plan. Sixty percent (60%) of the highest possible score is
considered the minimum passing score.
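
The 60% cutoff in the example is simple arithmetic; here is a minimal check, assuming a
hypothetical rubric with five criteria each scored 1-4 (a highest possible score of 20):

```python
# Hypothetical rubric: 5 criteria, each scored 1-4, so the maximum is 20.
max_score = 5 * 4
passing_score = 0.60 * max_score  # 60% of the highest possible score -> 12.0

student_score = 14
print(f"minimum passing score: {passing_score}")
print(f"student passes: {student_score >= passing_score}")  # True
```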

Now, it is your turn to design your own performance-based assessment.
