
Testing Writing: Reliability

TESTING WRITING

An important area of concern in testing is how students view their own
achievements. Often students' expectations of test results differ from actual
results. Students' grade expectations are often higher, which may negatively
affect student motivation. This situation calls for raising students' awareness of
their abilities.

Another question that many ELT programs are addressing is how students
perceive the process used to evaluate their work. Do they know how they are
being tested and what is acceptable by the standards of the institution and
their teachers?

The present study focuses on evaluating student essays, that is, assigning
scores in order to indicate proficiency level.

Reliability
Reliability is the degree to which the scores assigned to students’ work
accurately and consistently indicate their levels of performance or proficiency.
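
One common way to quantify this kind of reliability is to compare the scores two independent raters assign to the same set of essays. The sketch below, in Python, computes the Pearson correlation between two raters' scores; the scores and the choice of correlation as the reliability index are illustrative assumptions, not the procedure used in the study.

```python
# Illustrative sketch: inter-rater reliability estimated as the Pearson
# correlation between two raters' scores for the same essays.
# All scores below are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical essay scores (0-100) from two independent raters
rater_a = [72, 85, 60, 90, 78, 66]
rater_b = [70, 88, 58, 92, 80, 63]

print(f"Inter-rater reliability: {pearson(rater_a, rater_b):.2f}")
```

A value near 1.0 indicates that the two raters rank and score the essays consistently; values well below that suggest the scoring criteria need clarification.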

Validity
Validity is the degree to which a test or assignment actually measures what it is
intended to measure. There are five important aspects of validity (Hamp-Lyons
1991; Jacobs et al. 1981):

1. Face validity Does the test appear to measure what it purports to measure?

2. Content validity Does the test require writers to perform tasks similar to what
they are normally required to do in the classroom? Does it sample these tasks
representatively?

3. Concurrent validity Does the test require the same skill or sub-skills that other
similar tests require?

4. Construct validity Do the test results provide significant information about a
learner's ability to communicate effectively in English?

5. Predictive validity Does the test predict learners’ performance at some future
time?

To what extent should we teachers communicate these reliability and
validity concerns to our students? Teachers’ awareness of the issues of
reliability and validity is crucial, but perhaps equally important is how
accurately students perceive their own abilities and the extent to which they
understand what is considered acceptable EFL writing at the university level.

Implications
The results obtained from this survey reveal that students and their instructors
have different perceptions of acceptable essay writing. This has important
implications for writing evaluation in the university’s EFL program.

Teachers need to help students increase their awareness and understanding of
the proficiency levels required in writing essays.

One way teachers can do this is by showing their students sample essays,
perhaps drawn from the students’ own work, that represent each of the grade
levels from poor to excellent. These model essays could be photocopied for the
class so that they can be read and discussed in detail.

Students could take part in practice evaluation sessions by assigning grades for
each sample essay, including the three sub-skills of language, organization, and
content, according to the criteria for essays used by the EFL program. Such
practice evaluation could be done in small groups, with each group justifying the
grades it assigns in short oral presentations to the rest of the class, followed by
questions and discussion.
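
The grading exercise described above can be sketched programmatically. In the sketch below, the sub-skill weights and the grade-band cut-offs are hypothetical assumptions for illustration only; they are not the EFL program's actual criteria.

```python
# Hypothetical sketch of combining sub-skill grades into an overall essay
# grade. Weights and grade bands below are illustrative assumptions.

WEIGHTS = {"language": 0.4, "organization": 0.3, "content": 0.3}

def overall_grade(scores: dict) -> float:
    """Weighted average of the three sub-skill scores (each 0-100)."""
    return sum(WEIGHTS[skill] * scores[skill] for skill in WEIGHTS)

def band(score: float) -> str:
    """Map a numeric score to a coarse grade band (hypothetical cut-offs)."""
    if score >= 90:
        return "excellent"
    if score >= 75:
        return "good"
    if score >= 60:
        return "fair"
    return "poor"

sample = {"language": 70, "organization": 85, "content": 80}
score = overall_grade(sample)
print(f"{score:.1f} -> {band(score)}")  # 77.5 -> good
```

In a class session, each small group could score the same sample essay this way and then compare both the sub-skill scores and the resulting bands, making disagreements concrete and discussable.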

Once this exercise is done, the teacher could discuss the different grade ranges
and comment on the grades assigned by the groups in light of what grades the
essays would likely receive in a testing situation.

A second way to raise students' awareness of essay evaluation criteria is through
individual or small group conferences held periodically with the teacher. In fact,
although student-teacher conferences are carried out irregularly, they have been
quite successful in the EFL program at the university, especially for lower
proficiency level writers. Students become more involved in the evaluation
process and more aware of what is expected in their essays, and thus realistically
build confidence in their writing.

In addition to these awareness-raising activities, teachers need to periodically
revisit the writing criteria used for essay evaluation in light of
recent research and innovations in teaching writing.

Teachers also might need to clarify criteria for the different proficiency levels
for the various types of writing tasks assigned throughout a semester. Essay tests
in certain rhetorical modes, such as narration or description, might require
different evaluation criteria than those used for essays in the comparison or
contrast mode. Although the essay tests included in this survey were from the
end of the semester, teachers might want to consider whether they should
evaluate essays written earlier in the course according to objectives covered up
to that point.

Conclusion
Testing is an inextricable part of the instructional process. If a test is to
provide meaningful information on which teachers and administrators can base
their decisions, then many variables and concerns must be considered. Testing
writing is undeniably difficult. Although we teachers try hard to help students
acquire acceptable writing proficiency levels, are we aware that perhaps our
students do not know what is expected of them and do not have a realistic
concept of their own writing abilities?

This article has reported the grade expectations of students and the actual
grades they earned on two important end-of-semester essays. Results show that
students’ expectations are significantly higher than their actual proficiency
levels. Developing test procedures for more valid and reliable evaluation is
necessary and important; however, it does very little to motivate students to
continue learning if their perceived levels of performance are not compatible
with those of their teachers. In addition to the need to develop valid and reliable
testing procedures, we must not overlook the need to raise students’ awareness
of their abilities. It is perhaps only through this understanding that genuine
learning occurs.
