
Analysis of Assessment Results


Analyzing and Reporting Assessment Results


An assessment plan's value to the department lies in the evidence it offers about
overall department or program strengths and weaknesses, and in the evidence it
provides for change (Wright, 1991). The key to achieving real value from all of your work is to make the most of the information you have collected through effective analysis and interpretation practices.
The Best Ways to Analyze and Interpret Assessment Information
• Present the data in relation to the program’s identified goals and objectives
• Use qualitative and quantitative methods to present a well-balanced picture of the assessment goals and driving questions
• Vary your analysis and reporting procedures according to identified audiences (accreditors, campus reports, etc.)
• Develop recommendations based on the analysis of data, using identified goals as a framework within which to accomplish suggested changes
Consider the extent to which your findings can help you answer the following questions:
• What does the data say about students' mastery of subject matter, research skills, or writing?
• What does it say about meeting benchmark expectations?
• What does the data say about your students' preparation for taking the next step in their careers?
• Are graduates of your program getting good jobs and being accepted into reputable graduate schools?
• Are there areas where your students are outstanding?
• Do you see weaknesses in any particular skills, such as research or critical thinking?
These are compelling questions for faculty, administrators, students, and external
audiences alike. If your assessment information can shed light on these issues, the
value of your efforts will become all the more apparent.

Remember that data can often be misleading, and even threatening, when used for
purposes other than originally intended and agreed upon. For example, data collected
from the assessment of student performance in a capstone course should be used to
identify areas of strength and weakness in student learning “across the students'
entire experience in the major”. In this way, the data can guide curricular
modifications and departmental pedagogical strategies. The data should NOT be used
to evaluate the performance of the capstone course instructor.
Preparing Effective Assessment Plans & Reports
At its most basic, your report should have enough information to answer five basic
questions:
• What did you do?
• Why did you do it?
• What did you find?
• How will you use it?
• What is your evaluation of the assessment itself?
Format of the Assessment Plans and Reports
A comprehensive program assessment plan and report could be as simple as a
presentation to departments on the major results or it could be a detailed report to
the Provost on assessing learning outcomes in the program. The reality is that a
program rarely has only one purpose for engaging in assessment. Therefore, you may
want to develop reports that are tailored specifically to the audiences you need to
address.

Formal Reports
If you have decided to prepare a formal assessment report, your report should address
each of the identified audiences and might contain some or all of the following:
• A brief description of why the assessment activity was undertaken
• A brief description of the major and its goals, objectives, and intended learning outcomes
• An explanation of how the analysis was done and what methodology was used
• A presentation of major findings
• A discussion of how results are being used for program improvement
• An evaluation of the assessment plan/process itself
• An outline of next steps (programmatic, curricular, and assessment-related)
• An appendix containing a curriculum analysis matrix, relevant assignments and outcomes, data collection methods, and other information or materials as appropriate
Assessment reports do not necessarily have to be pages and pages of text and graphs
to be effective. You may choose to prepare a report that briefly and succinctly
outlines your assessment program results. By highlighting the main points and
significant results, you can convey in a concise manner what you were trying to
accomplish, what you did and did not accomplish, and what changes you will
implement as a result.
https://academicprograms.calpoly.edu/analysis-assessment-results
Purposes of assessment

Teaching and learning
The primary purpose of assessment is to improve students’ learning and teachers’ teaching as both respond to the information it provides. Assessment for learning is an ongoing process that arises out of the interaction between teaching and learning.

What makes assessment for learning effective is how well the information is used.

System improvement
Assessment can do more than simply diagnose and identify students’ learning needs; it can be used to assist improvements across the education system in a cycle of continuous improvement.

• Students and teachers can use the information gained from assessment to determine their next teaching and learning steps.
• Parents, families and whānau can be kept informed of next plans for teaching and learning and the progress being made, so they can play an active role in their children’s learning.
• School leaders can use the information for school-wide planning, to support their teachers and determine professional development needs.
• Communities and Boards of Trustees can use assessment information to assist their governance role and their decisions about staffing and resourcing.
• The Education Review Office can use assessment information to inform their advice for school improvement.
• The Ministry of Education can use assessment information to undertake policy review and development at a national level, so that government funding and policy intervention is targeted appropriately to support improved student outcomes.

https://assessment.tki.org.nz/Assessment-for-learning/Underlying-principles-of-assessment-for-learning/Purposes-of-assessment

Assessment tasks are developed from these descriptions, and teachers use them to identify how far students have progressed on various dimensions, to report that progress to others, and to plan instruction.
There is some tension between the formative and summative purposes of testing, even in these models. For example, Wise noted that having students score their own work and having teachers score their students’ work is an advantage in
formative assessment but a disadvantage in summative assessment. These models could be devised to permit some of
both, thus allowing teachers to see how closely their formative results map onto summative results. For example, each of
the tests administered throughout the year could include both summative and formative portions, some of which might
be scored by the teacher and some of which might be scored externally.

Wise also considered the way alternate models of testing might be used to provide information about teacher
effectiveness. In the current school-level accountability system, teachers are highly motivated to work together to make
sure that all students in the school succeed because the school is evaluated on the basis of how all students do. If, instead,
data were aggregated by teacher, there might be a perverse incentive for greater competition among teachers—which is
not likely to be good for students. One solution would be to incorporate other kinds of information about teachers, as is
done in many other employment settings. For example, teacher ratings could include not only student achievement data,
but also principal and peer ratings on such factors as contributions to the school as a whole and the learning environment,
innovations, and so forth. Such a system might also be used diagnostically, to help identify areas in which teachers need
additional support and development.
Wise had three general recommendations for assessment:
1. Assessments should closely follow, but also lead, the design of instruction.
2. Assessments should provide timely, actionable information, as well as summative information needed for
evaluation.
3. Aggregation of summative information should support the validity of intended interpretations. With a complex
system—as opposed to a single assessment—it is possible to meet multiple purposes in a valid manner.
“There is potentially great value to a more continuous assessment system incorporating different types of measures—even to the extremes of portfolios and group tasks,” Wise concluded, “and the flexibility such an approach offers for meeting a variety of goals is perhaps its greatest virtue.”
How Classroom Assessments Improve Learning
Thomas R. Guskey
Teachers who develop useful assessments, provide corrective instruction, and give students
second chances to demonstrate success can improve their instruction and help students learn.
Large-scale assessments, like all assessments, are designed for a specific purpose. Those used in most
states today are designed to rank-order schools and students for the purposes of accountability—and
some do so fairly well. But assessments designed for ranking are generally not good instruments for
helping teachers improve their instruction or modify their approach to individual students. First, students
take them at the end of the school year, when most instructional activities are near completion. Second,
teachers don't receive the results until two or three months later, by which time their students have
usually moved on to other teachers. And third, the results that teachers receive usually lack the level of
detail needed to target specific improvements (Barton, 2002; Kifer, 2001).
The assessments best suited to guide improvements in student learning are the quizzes, tests, writing
assignments, and other assessments that teachers administer on a regular basis in their classrooms.
Teachers trust the results from these assessments because of their direct relation to classroom
instructional goals. Plus, results are immediate and easy to analyze at the individual student level. To use
classroom assessments to make improvements, however, teachers must change both their view of
assessments and their interpretation of results. Specifically, they need to see their assessments as an
integral part of the instruction process and as crucial for helping students learn.
Despite the importance of assessments in education today, few teachers receive much formal training in
assessment design or analysis. A recent survey showed, for example, that fewer than half the states
require competence in assessment for licensure as a teacher (Stiggins, 1999). Lacking specific training,
teachers rely heavily on the assessments offered by the publisher of their textbooks or instructional
materials. When no suitable assessments are available, teachers construct their own in a haphazard
fashion, with questions and essay prompts similar to the ones that their teachers used. They treat
assessments as evaluation devices to administer when instructional activities are completed and to use
primarily for assigning students' grades.
To use assessments to improve instruction and student learning, teachers need to change their approach
to assessments in three important ways.
Make Assessments Useful
For Students
Nearly every student has suffered the experience of spending hours preparing for a major assessment,
only to discover that the material that he or she had studied was different from what the teacher chose to
emphasize on the assessment. This experience teaches students two unfortunate lessons. First,
students realize that hard work and effort don't pay off in school because the time and effort that they
spent studying had little or no influence on the results. And second, they learn that they cannot trust their
teachers (Guskey, 2000a). These are hardly the lessons that responsible teachers want their students to
learn.
Nonetheless, this experience is common because many teachers still mistakenly believe that they must
keep their assessments secret. As a result, students come to regard assessments as guessing games,
especially from the middle grades on. They view success as depending on how well they can guess what
their teachers will ask on quizzes, tests, and other assessments. Some teachers even take pride in their
ability to outguess students. They ask questions about isolated concepts or obscure understandings just
to see whether students are reading carefully. Generally, these teachers don't include such “gotcha”
questions maliciously, but rather—often unconsciously—because such questions were asked of them
when they were students.
Classroom assessments that serve as meaningful sources of information don't surprise students. Instead,
these assessments reflect the concepts and skills that the teacher emphasized in class, along with the
teacher's clear criteria for judging students' performance. These concepts, skills, and criteria align with the
teacher's instructional activities and, ideally, with state or district standards. Students see these
assessments as fair measures of important learning goals. Teachers facilitate learning by providing
students with important feedback on their learning progress and by helping them identify learning
problems (Bloom, Madaus, & Hastings, 1981; Stiggins, 2002).
Critics sometimes contend that this approach means “teaching to the test.” But the crucial issue is, What
determines the content and methods of teaching? If the test is the primary determinant of what teachers
teach and how they teach it, then we are indeed “teaching to the test.” But if desired learning goals are
the foundation of students' instructional experiences, then assessments of student learning are simply
extensions of those same goals. Instead of “teaching to the test,” teachers are more accurately “testing
what they teach.” If a concept or skill is important enough to assess, then it should be important enough to
teach. And if it is not important enough to teach, then there's little justification for assessing it.
For Teachers
The best classroom assessments also serve as meaningful sources of information for teachers, helping
them identify what they taught well and what they need to work on. Gathering this vital information does
not require a sophisticated statistical analysis of assessment results. Teachers need only make a simple
tally of how many students missed each assessment item or failed to meet a specific criterion. State
assessments sometimes provide similar item-by-item information, but concerns about item security and
the cost of developing new items each year usually make assessment developers reluctant to offer such
detailed information. Once teachers have made specific tallies, they can pay special attention to the
trouble spots—those items or criteria missed by large numbers of students in the class.
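To make the tally concrete, here is a minimal sketch in Python (an illustration, not part of Guskey's article); the item names, the response format, and the 50 percent trouble-spot cutoff are assumptions made for the example:

```python
# Hypothetical sketch: count how many students missed each assessment item
# and flag "trouble spots" -- items missed by a large share of the class.
from collections import Counter

# Each student's record maps an item name to True if answered correctly.
results = [
    {"item_1": True,  "item_2": False, "item_3": True},
    {"item_1": True,  "item_2": False, "item_3": False},
    {"item_1": False, "item_2": False, "item_3": True},
    {"item_1": True,  "item_2": True,  "item_3": True},
]

misses = Counter()
for student in results:
    for item, correct in student.items():
        if not correct:
            misses[item] += 1

class_size = len(results)
for item in sorted(misses):
    share = misses[item] / class_size
    flag = "  <-- trouble spot" if share >= 0.5 else ""
    print(f"{item}: missed by {misses[item]}/{class_size} ({share:.0%}){flag}")
```

Run on this sample data, the tally flags item_2, which three of the four students missed; that is the kind of item or criterion a teacher would examine first.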
In reviewing these results, the teacher must first consider the quality of the item or criterion. Perhaps the
question is ambiguously worded or the criterion is unclear. Perhaps students misinterpreted the question.
Whatever the case, teachers must determine whether these items adequately address the knowledge,
understanding, or skill that they were intended to measure.
If teachers find no obvious problems with the item or criterion, then they must turn their attention to their
teaching. When as many as half the students in a class answer a clear question incorrectly or fail to meet
a particular criterion, it's not a student learning problem—it's a teaching problem. Whatever teaching
strategy was used, whatever examples were employed, or whatever explanation was offered, it simply
didn't work.
Analyzing assessment results in this way means setting aside some powerful ego issues. Many teachers
may initially say, “I taught them. They just didn't learn it!” But on reflection, most recognize that their
effectiveness is not defined on the basis of what they do as teachers but rather on what their students are
able to do. Can effective teaching take place in the absence of learning? Certainly not.
Some argue that such a perspective puts too much responsibility on teachers and not enough on
students. Occasionally, teachers respond, “Don't students have responsibilities in this process? Shouldn't
students display initiative and personal accountability?”
Indeed, teachers and students share responsibility for learning. Even with valiant teaching efforts, we
cannot guarantee that all students will learn everything excellently. Only rarely do teachers find items or
assessment criteria that every student answers correctly. A few students are never willing to put forth the
necessary effort, but these students tend to be the exception, not the rule. If a teacher is reaching fewer
than half of the students in the class, the teacher's method of instruction needs to improve. And teachers
need this kind of evidence to help target their instructional improvement efforts.
The Benefits of Assessment
Using classroom assessment to improve student learning is not a new idea. More than 30 years ago,
Benjamin Bloom showed how to conduct this process in practical and highly effective ways when he
described the practice of mastery learning (Bloom, 1968, 1971). But since that time, the emphasis on
assessments as tools for accountability has diverted attention from this more important and fundamental
purpose.
Assessments can be a vital component in our efforts to improve education. But as long as we use them
only as a means to rank schools and students, we will miss their most powerful benefits. We must focus
instead on helping teachers change the way they use assessment results, improve the quality of their
classroom assessments, and align their assessments with valued learning goals and state or district
standards. When teachers' classroom assessments become an integral part of the instructional process
and a central ingredient in their efforts to help students learn, the benefits of assessment for both students
and teachers will be boundless.
References
Barton, P. E. (2002). Staying on course in education reform. Princeton, NJ: Statistics & Research
Division, Policy Information Center, Educational Testing Service.
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment (UCLA-CSEIP), 1(2), 1–12.
Bloom, B. S. (1971). Mastery learning. In J. H. Block (Ed.), Mastery learning: Theory and practice. New
York: Holt, Rinehart & Winston.
Bloom, B. S., Madaus, G. F., & Hastings, J. T. (1981). Evaluation to improve learning. New York:
McGraw-Hill.
Guskey, T. R. (1997). Implementing mastery learning (2nd ed.). Belmont, CA: Wadsworth.

Using Assessment Data in the Classroom
By Wes Gordon, posted November 28, 2016

As teachers, we have so many tools at our disposal that it can become overwhelming to
sort through all the items in the toolbox and select the one that will most benefit our students. So
often I have found myself planning that perfect lesson for my students only to get sidetracked by
over-analyzing the best tool to use for a given task. Yet in any given lesson, the most important
tool I have is the assessment tool.
I know what you are thinking: Assessment? Do you mean testing? Surely you do not count testing as
a tool to be used daily? You are right, though not entirely. Assessment as an umbrella term for data
collection and testing is a valid part of our teaching practice; however, it is not everything.
Assessment as a tool for collecting data on how well our students are learning can and should take
many forms. What form it takes should depend on several key points. First, the objective of the
lesson, or what it is the children are meant to learn, should help determine how we know learning has
happened. From the beginning stages of lesson planning, thinking about how the students will show
what they have learned will help to decide what tool we will use to measure the students’ learning.
This is called backward planning, and it is not teaching to the test.
Another key question to ask ourselves is, “What will we do with the data we collect from our
assessments?” The answer should be to use the data to formatively assess what our students know and are able to do, which, ultimately, will inform follow-up lessons. Thomas Guskey
explains in his article, “How Classroom Assessments Improve Learning,” that assessments need
to serve as meaningful sources of information that should not mark the end of learning for the
students. Instead, Guskey says that assessments need to be followed by high-quality, corrective instruction designed to remedy whatever learning has not occurred (Educational Leadership, February 2003). I refer to this type of instruction as “data-driven” instruction because it is just
that. The teacher collects information, or data, on the students and creates their lesson based on the
findings. To further support the idea that an assessment should not be the end of learning, think
about what we do as our students are working in class. We circulate throughout the class as they
work, taking note of what the students are doing and saying. When we find that a strong majority of
children do not seem to understand the task or standard, we stop, regroup, and reteach. This is
assessment in a formative role, and we do it without even thinking about it.

Regardless of what form the assessment ends up taking, we need to be sure it is purposeful and meaningful for student learning; otherwise, the
assessment is useless. As with every piece of a lesson, assessment requires planning if it is to be of
value. This should seem obvious; however, sometimes there are assessments we are required to
administer. Several years ago, when I was teaching second grade, the district I worked for used the
then-popular DIBELS assessment for reading. DIBELS is a student performance data collection system developed by the University of Oregon’s Center on Teaching and Learning, and it is meant to assess the basic early literacy skills of students in the early stages of literacy development. DIBELS includes an extensive battery of assessments that the school district used for
benchmarking student progress. What it meant for my second graders was a few days, three times a
year, to assess their Oral Reading Fluency, or ORF. Yes, this completely disrupted our teaching
schedule, but it allowed us to discover how many words correct per minute our students could read!
Each child read three short texts, each timed for one minute; we would average the scores and use that average to estimate their approximate reading level. Or so I thought. What a shock it was for me when our reading coach told me that we were not to average the three scores, but just to look at the score from the second text!
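Purely as an illustration of the scoring question in that anecdote (the numbers below are invented, and neither rule is presented here as official DIBELS scoring guidance), the two choices can give noticeably different results:

```python
# Hypothetical words-correct-per-minute (WCPM) scores from three one-minute
# passage readings; the values are invented for illustration only.
from statistics import mean

wcpm_scores = [92, 78, 85]

average_of_three = mean(wcpm_scores)      # what the author expected to report
second_passage_only = wcpm_scores[1]      # what the reading coach asked for

print(f"Average of three passages: {average_of_three:.1f} WCPM")
print(f"Second passage only:       {second_passage_only} WCPM")
```

With these made-up numbers, the two rules report 85.0 versus 78 WCPM, which is why the choice of rule matters for how a child's reading level is judged.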
It was at this moment in my career that I had to stop and think about the purpose of assessment and
what it meant for our students. This experience affected my view of programs like DIBELS for many
years because I felt the data we were collecting was not useful and meaningful to my instruction. My
point here is not to complain about big-budget assessments like DIBELS (Dr. Timothy Shanahan states in his blog that studies have shown a high correlation between some DIBELS subtests and improved student performance, when administered as intended). Instead, I use this
example as a means to reflect on our instructional practice and to urge us to use the most relevant
and purposeful assessments for our students when making decisions about our instruction.
Assessment takes on many different forms in today’s classroom, from formal statewide testing to quiet observation of students’ effort while they work. Whatever methods of assessment you choose for collecting data on your students’ growth and level of understanding need to be used with intent and purpose. Like the instruction itself, be sure that what you choose fits the
students you teach and that it effectively gathers data you can use to best deliver high quality
instruction for your class.

Summative assessment also informs curriculum and instruction
If an entire class does poorly, though, teachers may want to reexamine their teaching
process to see whether the gap in student performance is the result of a failure to connect students with the
material. According to Marcy Emberger, former director of the Maryland Assessment
Consortium and former professional data development specialist, this should motivate
teachers to consider revising or restructuring class materials or teaching strategies in order
to ensure class learning goals are met.
These issues can be brought to team discussions or professional teaching communities in
order to facilitate a non-judgmental conversation about improving classroom practice.
Emberger suggests that teachers participate in a protocol that follows specific rules, with the
teacher presenting the materials, assessment, and issues raised, followed by team
thinking time, discussion, and questions (with the individual teacher removed from the
discussion or present but silent). This results in a plan for the future, which may include
reteaching material.

Formative assessments provide immediate feedback on lesson plans
A final, and perhaps most important, data set for teachers is collected through formative
assessments. These are informal, low-stakes assessments using a thumbs up or thumbs
down, the stoplight method, exit slips, or brief quizzes to measure learning immediately after
lectures or classroom activities. Information gleaned from this process allows for quick
modification to the next class’s plan and identifies learning gaps long before they show up
in a summative assessment or become an issue in standardized testing.
By itself, data cannot solve America’s education problems; however, the collection of data
at the standardized, formal, and informal assessment levels gives teachers a way to
understand student needs, group students based on strengths and weaknesses, and design
(and adjust) lesson plans to ensure that students continuously improve.
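As one small, hedged illustration of that last point (the cut-off scores and group labels below are invented, not drawn from the article), a teacher might group students from a quick formative check like this:

```python
# Hypothetical sketch: group students by a formative-check score so that
# follow-up instruction can be targeted. Score bands are illustrative only.
scores = {"Ana": 3, "Ben": 9, "Chi": 6, "Dev": 2, "Eli": 8}  # out of 10

groups = {"reteach": [], "practice": [], "extend": []}
for student, score in scores.items():
    if score <= 4:
        groups["reteach"].append(student)
    elif score <= 7:
        groups["practice"].append(student)
    else:
        groups["extend"].append(student)

for group, members in groups.items():
    print(f"{group}: {', '.join(members) if members else '(none)'}")
```

The point is not the particular thresholds but the workflow: the data from an informal check feeds directly into how the next lesson is organized.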
