
EFL ASSESSMENT

Didáctica Específica del Inglés II (Specific Didactics of English II)

Programa de Formación Pedagógica (Pedagogical Training Programme)
OUTLINE
Types of marking

Test qualities

Reliability

Validity

Practicality

Issues of testing:

Test anxiety

Washback
What is assessment?

A systematic process of evaluating and measuring collected data and information on students’ language knowledge, understanding, and ability in order to improve their language learning and development.

The process of measuring an individual’s performance on a given task in order to make inferences about their abilities. It can take different forms, including tests, quizzes, interviews, written samples, observations, and so on.
Types of tests

Criterion-referenced

Norm-referenced
Criterion-referenced testing (CRT)

A CRT is designed to measure student performance and outcomes using a fixed set of previously determined, concisely written criteria which students are supposed to know at a specific stage or level of their education. CRTs are used to measure whether students have acquired a specific knowledge or skill set based on the standards set by the school (Glaser, 1963).

Students pass the test or are considered proficient when they perform at or
above the established qualifications.

CRT item types include open-ended questions, multiple-choice questions, true-false questions, or a combination of these. Teachers’ classroom tests and quizzes can be considered CRTs because they are based on the curricular standards of the students’ grade level.
Norm-referenced testing (NRT)

An NRT is a test designed to use a bell curve to rank test takers. On this curve, the majority of students cluster in the middle, with few people scoring at the low and high ends.

NRTs compare a student’s performance to the performance of others in areas of knowledge. They measure how far, and to what extent, a student is performing ahead of or behind the norm. The items used in NRTs vary in difficulty and are chosen in a way that discriminates between high and low achievers (Glaser, 1963).

IELTS, SAT, and GRE exams are good examples of NRTs.
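As a rough illustration of norm-referenced ranking (a sketch only, not any exam’s actual scoring procedure), a raw score can be converted to a z-score against the norm group and then to a percentile on the bell curve:

```python
from statistics import NormalDist, mean, stdev

def percentile_rank(raw_score, norm_group):
    """Percentile of raw_score relative to a norm group, assuming
    the norm group's scores are approximately normally distributed."""
    z = (raw_score - mean(norm_group)) / stdev(norm_group)
    return NormalDist().cdf(z) * 100

# Made-up norm group: a score equal to the group mean lands at the 50th percentile.
norm = [40, 50, 55, 60, 65, 70, 80]
print(round(percentile_rank(60, norm), 1))  # prints 50.0
```

A test taker at the 50th percentile is thus defined as average relative to the norm group, regardless of how much they actually know, which is precisely the contrast with criterion-referenced testing.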


Types of marking

Holistic marking (global marking)

Mostly used to evaluate productive skills. It uses integrated criteria to reflect the
overall effectiveness of written or spoken discourse. It may also reflect the impact
that a speaker or writer has on the listener or reader. Holistic scores can be used
together with analytic scores in order to arrive at an accurate and informed decision.

Analytic marking (analytic scoring)

It is a method of scoring or evaluating that allocates individual scores for different aspects of a student’s performance. For instance, in analytic marking of students’ writing, different elements of writing like organization, grammar, word choice, and mechanics are evaluated separately.
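The separate sub-scores of analytic marking can be combined into a weighted total. The aspect names, bands, and weights below are illustrative only, not a standard rubric:

```python
# Hypothetical analytic rubric: each aspect scored on a 0-5 band,
# then combined into a weighted total (weights sum to 1).
WEIGHTS = {"organization": 0.3, "grammar": 0.3, "word_choice": 0.2, "mechanics": 0.2}

def analytic_total(subscores, weights=WEIGHTS):
    """Weighted sum of per-aspect scores."""
    return sum(weights[aspect] * score for aspect, score in subscores.items())

scores = {"organization": 4, "grammar": 3, "word_choice": 5, "mechanics": 4}
print(round(analytic_total(scores), 2))  # 0.3*4 + 0.3*3 + 0.2*5 + 0.2*4 = 3.9
```

Keeping the sub-scores visible, rather than reporting only the total, is what lets analytic marking feed diagnostic feedback back to the student.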
Test Qualities
Reliability
Test consistency against itself and other measures.

The questions to be raised are:

1) Are test results dependable and trustworthy?

2) If a student took the same test the following day would the test results be the same?

Reliability is a fundamental criterion of a good test. It is often regarded as an aspect of validity.

Reliability for selected-response items is measured through indices such as Cronbach’s alpha.
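As a sketch of how such an index is computed, the standard Cronbach’s alpha formula compares the variance of individual items to the variance of total scores (the item scores below are made up):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix:
    rows = test takers, columns = selected-response items."""
    k = len(scores[0])                # number of items
    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var([row[i] for row in scores]) for i in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up scores for four test takers on three items:
data = [[1, 1, 1], [2, 1, 2], [3, 3, 2], [4, 4, 4]]
print(round(cronbach_alpha(data), 2))  # prints 0.96
```

Values closer to 1 indicate that the items behave consistently with one another, i.e., that a student retaking an equivalent set of items would likely obtain a similar score.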

Example 1: A rubric is reliable if, when applied by different raters to the same candidate, it delivers similar results (inter-rater reliability).

Example 2: In SIMCE de Inglés, higher-achieving students get higher scores and lower-achieving students get lower scores.
Validity

Validity refers to ‘the degree to which’ or ‘the accuracy with which’ an assessment measures what it is supposed to measure.

Since the 1980s there has been a general consensus that it is more
appropriate to talk about the validity of the uses and interpretations
of a test, rather than the test itself.

A test could be valid for some uses for some test takers, but not for
others.
Construct Validity

It refers to the extent to which a test measures what it claims to measure. Construct validity is fundamental to the overall validity of the test.

The extent to which a language test is representative of an underlying theory of language learning.

Cyril Weir (2013) identifies three central aspects of construct validity.

Cognitive validity: Do the cognitive processes required to complete test tasks sufficiently resemble the cognitive processes a candidate would normally employ in non-test conditions?

Is the range of processes elicited by test items sufficiently comprehensive to be considered representative of real-world behavior?

Are the processes appropriately calibrated to the level of proficiency of the learner being evaluated?
Content Validity

How well a test or an assessment represents all aspects of a given construct.

For example, a teacher gives students a test on reading comprehension. The aim of this test is to assess students’ reading comprehension. If the test does indeed measure all appropriate aspects of reading comprehension, then it is said to have content validity.

The extent to which the items or tasks on a test constitute a representative sample of the knowledge or ability to be tested. In a classroom-teaching context, these are related to a syllabus or course.
PRACTICALITY

Practicality can refer to a criterion for selecting specific types of assessments; concretely, to the teacher or institution’s ability to administer the assessment within the constraints of time, space, staffing, resources, government/institutional policies, and candidates’ or parents’ own preferences, among others. For example, formal oral interviews can be omitted from a course’s evaluation scheme on practicality grounds, favoring instead portfolio-based speaking assessments.

How would this criterion apply to your case this year?

You might also like