Constructing A Language Assessment Knowledge Base
The competencies required for conducting assessment in the educational context have recently been reformulated in view of social constructivist perspectives and the acknowledgement of the role of classroom assessment in promoting learning. These changes have impacted the knowledge base language assessors need to obtain, and hence the contents of language assessment courses. This paper considers the possible components of a language assessment literacy knowledge base, and proposes the establishment of a core knowledge framework for courses in language assessment.

Keywords: assessment culture, assessment literacy, language assessment courses, language assessment literacy, language testing courses
Testing concepts have until recently formed the framework within which various facets of evaluating language knowledge/ability were researched and discussed. This is clearly reflected in the name of this very journal, now celebrating its anniversary. It is also language testing experts who have been disseminating knowledge in courses on the theoretical and practical aspects of language testing the world over. The last few years, however, have seen the introduction of assessment terminology into the language evaluation research discussion, signaling not merely a semantic change but a profound conceptual one, with assessment perceived to be "an overarching term used to refer to all methods and approaches to testing and evaluation whether in research studies or educational contexts" (Kunnan, 2004, p. 1). With this broader understanding comes a recognition of the need for multiple forms of assessment for collecting information for various purposes in diverse contexts (Huerta-Macias, 1995). However, this conceptual
Address for correspondence: Ofra Inbar-Lourie, School of Education, Tel-Aviv University, TelAviv 69978, Israel; email: inbarofra@bezeqint.net
2008 SAGE Publications (Los Angeles, London, New Delhi and Singapore) DOI:10.1177/0265532208090158
shift goes beyond notions of alternative assessment (or alternatives in assessment, Brown & Hudson, 1998), perceiving the language evaluation process as a socially constructed activity embedded in the local context, with teachers, students and other community members recognized as meaningful assessment partners (Leung, 2004; Lynch, 2001; Lynch & Shaw, 2005; McNamara & Roever, 2006). This change of thinking has significant implications for the knowledge base in the field and hence for courses that have hitherto been referred to as language testing courses, and which lately carry titles that incorporate an assessment perspective (such as Language Assessment and Evaluation, Read & Erlam, 2007; or Assessment in the Language Classroom, O'Loughlin, 2006). While some available courses have retained their language testing orientation and deal mostly with issues pertaining to the design and use of tests as the means for determining language proficiency (Bailey & Brown, 1995, and see Brown & Bailey, this issue), others incorporate additional assessment components, as well as an examination of the social roles of tests and testers in the assessment process (for example, Kleinsasser, 2005). This paper will focus on the knowledge base in courses in language assessment geared primarily for language teachers, but also for other stakeholders who oversee the language assessment process in educational contexts. The term assessment, and hence language assessment courses, will be used hereafter to distinguish such courses from those operating within a more traditional language testing paradigm. The paper will begin by surveying the components of assessment literacy in general education and the implications this holds for language assessment literacy. It will then focus on the crux of the issue, i.e., the assessment skills and understandings currently perceived as vital for conducting language assessment in educational settings.
These competencies will then be linked to research on language testing and assessment courses, culminating with a proposal for establishing a core language assessment knowledge base.
I Assessment literacy
Views about the knowledge base required for conducting assessment in education have been influenced by socio-cultural approaches towards teaching and learning and the growing recognition of the social role of assessment in education (Broadfoot, 1996; 2005; National Research Council, 2001). In a review of educational learning theory, from behaviorist to cognitive social-constructivist models,
Shepard (2000) observes the gap between psychometric testing environments, particularly those employing high-stakes standardized tests (i.e., testing cultures, Wolf, Bixby, Glenn, & Gardner, 1991), and contemporary learning environments which follow Vygotskian theories and are referred to as learning cultures. Learning cultures are grounded in an interpretive epistemology which views reality as socially constructed. Learning is perceived as a culturally situated and mediated activity, with language occupying a fundamental role as a mediating tool (Lantolf, 2000). The learning community jointly co-constructs knowledge, and teachers' feedback plays a central role in supporting and promoting students' language learning (for a detailed description of the constructivist class, see Windschitl, 2002). The assessment environment congruent with learning cultures is referred to as an assessment culture (Dochy & Segers, 2001; Shepard, 2000). Experts and practitioners who function within an assessment culture share epistemological suppositions about the dynamic nature of knowledge, as well as assumptions about students, teaching and learning. Students are viewed as active, empowered partners in the assessment process who monitor their own learning, provide feedback to their peers and set criteria for evaluating progress. The teacher's role is geared to formulating and scaffolding learning on the basis of on-going feedback from internal and external assessment sources. Learning development is monitored and recorded regularly, focusing on both the process and product dimensions, and student evaluation is provided in the form of a profile rather than a numerical score (Birenbaum, 1996; Wolf et al., 1991).
Seeing that learning and assessment are viewed as intertwined, the assessment culture highlights the notion of assessment for learning, emphasizing formative assessment practices (Black & Wiliam, 1998) and introducing suitable assessment tools such as Dynamic Assessment (Lantolf & Poehner, 2008). Conducting assessment within this constructivist, contextually situated framework necessitates internalization of assumptions and beliefs about assessment as a social practice and a social product (Filer, 2000), and awareness of the possible intended or unintended consequences particular assessment measures and actions may have for specific groups and individuals. It also requires appreciation of the role of assessment in the instructional-learning cycle (Brookhart, 2003; Gipps, 1994), and underscores the teacher's dual (and often fuzzy) role as both teacher and assessor of curriculum attainment, in addition to being (in the case of the language teacher) a facilitator of language development (Rea-Dickins, 2007, p. 193).
Since assessment also operates on the external institutional level for ranking, monitoring and placement purposes, teachers and administrators are expected to be familiar with external assessment formats, assessment procedures and data analysis, so as to interpret the results and feed them into their teaching. They also need to gain understanding of the competing and often contradictory forces at play between the testing and assessment cultures. This is especially noticeable in contexts where practitioners function simultaneously within two incompatible cultures: encouraged within their classrooms to pursue socio-culturally based classroom pedagogy and assessment practices (often referred to as alternative assessment, Hargreaves, Earl & Schmidt, 2002), while concurrently required by external authorities to abide by the rules of testing cultures (McKay & Brindley, 2007). Many issues in this dual, complex reality are far from resolved. Teasdale and Leung (2000) analyze how this mixed discourse has contributed to "a muddled view of the meanings attributed to assessment" (p. 176). This is most prominent in the area of validity, and the debate centers on whether to employ positivist forms of inquiry or broaden the scope, using methodologies that are compatible with the assumptions that underlie interpretive assessment cultures (Lynch & Shaw, 2005). In view of these changed perspectives, a reformulation of the competencies needed for conducting assessment in the educational context is called for. McMillan (2000) lists eight assessment principles currently deemed fundamental for teachers and school administrators, attempting to form a comprehensive body of knowledge which takes into account the different perspectives and allows teachers and administrators access to the issues at hand.
The principles include reference to the potential tensions which impact decision-making in assessment, that is, tensions between formative and summative assessment, between criterion- and norm-referenced approaches, between traditional and alternative assessment formats, and between external standardized testing and classroom tests. Additional principles refer to the formative role of assessment in instruction, to the importance of using multiple means for assessing learners, and to the need for fair, ethical, valid, and at the same time efficient and feasible, assessment practices (McMillan, 2000). Similar views are reflected in the Standards for Teacher Competence in Educational Assessment of Students (followed by Standards for Educational Administrators) published as early as 1990 by the American Federation of Teachers, the National Council on Measurement in Education and the National Education Association. The standards refer to activities occurring prior to, during and
following instruction, involving decision-making in the school and school district, as well as in a wider community of educators. They range from the ability to choose and develop assessment tools to match instruction, to the skillful administration, scoring and interpretation of externally and internally administered assessment procedures. They also incorporate standards for utilizing assessment data for various teaching purposes, for reporting results to different parties, and for recognizing "unethical, illegal, and otherwise inappropriate assessment methods and uses of assessment" (1990, p. 7). In order to operationalize and implement this conceptual framework one needs to gain literacy in assessment concepts, skills and strategies. Assessment literacy (Boyles, 2005; Malone, 2008; Stoynoff & Chapelle, 2005) is the ability to understand, analyze, and apply information on student performance to improve instruction (Falsgraf, 2005). Becoming assessment literate requires the attainment of a toolbox of competencies, some practical and some theoretical, on why, when and how to go about constructing a variety of assessment procedures (Boyles, 2005; Hoyt, 2005). Being literate in assessment thus means having the capacity to ask and answer critical questions about the purpose of assessment, about the fitness of the tool being used, about testing conditions, and about what is going to happen on the basis of the results. The manner and process of acquiring assessment literacy, similar to professional development initiatives in socio-cultural pedagogy (see Teemant, Smith, Pinnegar & Egan, 2005), assume a constructivist learning approach, whereby the participants and assessment experts form a knowledge community by discussing, critiquing and questioning fundamental issues relevant to their context. The concepts and derived assessment skills outlined above are perceived to be core competencies in the area of assessment.
The question is what additional knowledge language assessors require, and whether or to what extent such knowledge is incorporated into language assessment courses.
II Language assessment literacy
The very existence of language assessment courses indicates that expertise in language assessment requires additional competencies. The language assessment knowledge base in fact comprises layers of assessment literacy skills combined with language-specific competencies, forming a distinct entity that can be referred to as language
assessment literacy (henceforth, LAL). Some of the LAL competency dimensions focus on the trait, or the what of language testing and assessment, while others relate to the method, the how (Shohamy, 2008). However, understanding the what and performing the how necessitates appreciation of the background and reasoning behind the actions taken, that is, the why. Each of these aspects is rooted in language-related considerations as well as in general education and in assessment and testing cultures. Discussion of LAL needs to be considered with reference to current assessment developments, in particular the support for assessment for learning approaches in many parts of the world (Assessment Reform Group, 2002; Leung, 2004; Davison, 2007). Language assessors, particularly teachers, are expected to engage in classroom assessment practices, report on learners' progress aligned with external criteria, as well as prepare learners for external examinations. To comply with these demands Brindley (2001a) offers an outline for programs for professional development in language assessment, which focuses on the knowledge components required for conducting language assessment in an educational context. The outline is modular in that it acknowledges different assessment needs, some of which are regarded as core and some as optional, and provides a useful framework for considering the components of the LAL knowledge base. The review that follows uses the outline as a basis for analyzing and discussing LAL competencies in relation to the assessment knowledge dimensions mentioned above: the reasoning or rationale for assessment (the why), the description of the trait to be assessed (the what), and the assessment process (the how).

1 The why
The first core module in Brindley's proposal provides the background and rationale for assessment.
The focal point is the social, educational, and political aspects of assessment in the wider community, looking at "questions of accountability, standards, ethics and the role in society of standardized competitive examinations and tests" (Brindley, 2001a, p. 129). This social perspective concurs with what McNamara (2006) refers to as the social turn that the language assessment field has taken in the last decade. It implies epistemological shifts regarding knowledge construction as well as critical views on the role of language tests in society, on the need to democratize assessment and on the responsibility of language testers (Kunnan, 2000; Lynch, 2001; McNamara & Roever, 2006; Shohamy,
2001). Lynch and Shaw (2005) illustrate some of these issues by framing their discussion of the use of the portfolio within an assessment paradigm that is based on different suppositions than those which typify traditional testing cultures. The paradigm offered is that of an assessment culture, with the concepts of validity and ethics analyzed in terms of power relations. It is important to note, however, that the concepts of language or language assessment are not explicitly referred to in Brindley's outline as central social themes in the listing of topics to be discussed within this module. I would like to argue that the role of language evaluation in impacting decision-making in various areas (e.g. civil, vocational, educational) needs to be accentuated and critically discussed in this initial phase of the professional development and knowledge attainment and construction of future language assessors. This is crucial, for it will foreground and shape perceptions of language as a social entity in terms of both the trait to be assessed and the assessment process.

2 The what
The second core module in the program (Brindley, 2001a), entitled Defining and describing proficiency, presents the trait, the theoretical basis for language tests and assessment, including the concepts of validity and reliability and a critical evaluation of models of language knowledge. In practice this implies that language assessors are expected to be well versed in current theories and research findings regarding various facets of language knowledge and use, so as to skillfully implement assessment measures that are compatible with these current perspectives. For example, in the case of second language learners, language assessors need to be aware of current evidence regarding the role of the first language and culture in acquiring additional languages (Cummins, 2001).
Such knowledge will guide them in ensuring that the needs of immigrant students are aptly accommodated in testing situations (Solano-Flores & Trumbull, 2003). Also pertinent to making and implementing decisions in language assessment are current debates about the norms of English as an International Language (EIL) (Canagarajah, 2006; Elder & Davies, 2006) and the linguistic competence of the multilingual speaker (Canagarajah, 2007). One could reasonably argue that these topics should form part of the knowledge base of any informed language expert or practitioner. However, expertise in LAL requires, in addition, that these theories, approaches and controversies be grafted onto competencies in assessment.
Furthermore, since present assessment paradigms strive for integration with teaching, knowledge about current language teaching pedagogy (in contrast with general pedagogical skills) is also part of LAL. Language assessors need to be familiar with contemporary theories about the learning, teaching and assessment of grammar, for example, so as to be able to design suitable assessment measures (Purpura, 2004). Likewise, knowledge about integrated language and content models (e.g., Kaufman & Crandall, 2005; Met, 1999), and the specific assessment considerations such curricular models call for, is essential for designing integrated content-based tasks which adequately reflect this construct (Byrnes, 2008; Cushing Weigle & Jensen, 1997).

3 The how
The method or the how angle in the Brindley model is introduced in two non-core modules, each branching off in a different direction. Participants can choose (according to their context and needs) between taking the Constructing and evaluating language tests module, which focuses on test development and analysis, or a criterion-referenced focus in the module Assessment in the language curriculum. Each module details the skills learners will gain so as to perform the relevant assessment or testing process using different assessment procedures. Since the modules are optional, some of the participants may acquire knowledge in only one area, thus being denied the full range of possibilities that the language assessment field offers. The underlying assumption is that certain audiences typically engage in one type of assessment environment (in the case of teachers this would probably mean assessment rather than tests), and that it therefore suffices to choose only one of the modules. The question is whether such a policy is valid. The division offered can be likened to choosing between either quantitative or qualitative research paradigms rather than acquiring initial understandings and skills in both, along with their underlying assumptions.
Since at present both research traditions are employed in most disciplines, including the language assessment domain (where qualitative research is also in use in large-scale assessment; see, for example, Taylor, 2007), restricting the knowledge base to a single tradition would be problematic, particularly in view of mixed-methods research designs. Stakeholders functioning in language assessment cultures need to attain knowledge about the different facets of language assessment, both large-scale examinations and classroom assessment, to deliver
appropriate assessment and interpret assessment outcomes (Inbar-Lourie, 2008). The extent of the knowledge gained may differ and vary in scope and intensity depending on individual circumstances and needs, as well as personal inclinations and beliefs. In addition, partaking in the critical appraisal and interpretation of the discourse that characterizes various forces in internal and external assessment (Brindley, 2001b; Johnson & Kress, 2003; Leung & Rea-Dickins, 2007) also necessitates a sound, comprehensive knowledge base in the different assessment orientations. The fifth and last component mentioned in the Brindley model extends beyond assessment and testing techniques, to include only those professionals whose context requires delving into more advanced planning and exploration of assessment initiatives and research (Brindley, 2001a, pp. 129–130). However, what is currently evident from reports on the implementation of language assessment reforms is that classroom practitioners are also expected to actively participate in assessment initiatives for evaluating language ability. This is apparent, for example, in the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) (Council of Europe, 2001), designed to act as "a frame of reference in terms of which different qualifications can be described, different language learning objectives can be identified, and the basis of different achievement standards can be set out" (Morrow, 2004, p. 7). Functioning within the framework assumes not merely familiarity with its different levels and descriptors, but also language teaching skills and assessment know-how. It also presumes that language assessors (in many cases teachers) can adapt the framework for their teaching and assessment purposes (Little, 2005), and thus function at what Brindley (2001a) characterizes as an advanced and more specialized level.
The Brindley (2001a) proposal discussed at length in the preceding paragraphs offers interesting insights and guidelines. It sets out core competencies and allows for expansion of the assessment knowledge base beyond what is normally included in a core assessment course. Similar principles are apparent in Fulcher and Davidson (2007), whose advanced resource book is divided into an Introduction section, which presents key concepts in language testing and assessment, an Extension of these issues and concepts, and a third section entitled Exploration which builds on the knowledge gained in the first two parts. To sum up, though formed on the assessment literacy knowledge base, LAL can be said to constitute a unique, complex entity. The fact that it is
concerned specifically with language means that considerations of assessment purpose, trait and method must be understood in light of contemporary theories in language-related areas. The next section explores whether, or to what extent, these competencies in fact form part of language assessment courses.
III Language assessment courses
Clearly the general purpose of language testing and assessment courses is to train language experts in language assessment concepts, skills and strategies, or in LAL. However, language assessment courses do not come in one shape or size. The target audience differs considerably, as do the course contents. The course may be intended for teachers (practicing or training) who are responsible for both teaching and assessment in an on-going manner, or for researchers and testing experts. What is known about the nature and contents of language assessment courses? Not very much, as little research has been targeted specifically at the courses and their objectives and contents. One of the few research studies on language testing courses was conducted by Bailey and Brown in 1995, specifically targeting lecturers teaching language testing courses. The survey questionnaire used related to a number of issues: the hands-on experience students get in the course; the general topics attended to; item analysis; descriptive statistics; test consistency; test validity; textbooks used; and perceived students' attitudes towards the testing course. The topics in each section were presented from a measurement orientation, presumably reflecting the researchers' views on what language testing courses are likely to incorporate. All the items in the hands-on experience section, for example, focus on different stages of the testing process. Findings for that section showed that the activity attended to most was test critique; then, ordered by frequency of use, were item writing, interpreting test scores, revising and then scoring tests, administering tests and test taking. Additional findings showed that measuring the different skills is attended to most frequently, followed by proficiency testing (Bailey & Brown, 1995).
The questionnaire, composed more than a decade ago, does not consider any form of assessment other than tests, nor other issues relevant to testing or assessment. In that sense it differs from the standards for assessment mentioned above (1990), which incorporate the use of a variety of assessment tools as well as reference to ethical conduct. Furthermore, the only mention of language-related competencies in
the questionnaire is measuring the different skills and proficiency testing (p. 253), while all the other topics are applicable to general courses in educational testing, thus relinquishing to a great extent claims that would support the recognition of language testing as a distinct construct. The researchers do in fact note the Standards for Student Assessment for Teachers mentioned above, commenting that "Although the items in the questionnaire were not developed with these standards in mind, it behooves us, as a profession, to consider drafting or adapting a similar set of standards dealing specifically with language assessment" (Bailey & Brown, 1995, p. 250). While the Bailey and Brown study surveyed a substantial number of respondents about their language testing courses, researchers in two more recent studies report directly and in more detail on their own language assessment courses. The methodology used in both cases is narrative analysis, but from different perspectives: while one study (O'Loughlin, 2006) explores the course from the point of view of two of the course participants (using their written contributions to an online forum), the second study reflects on the experience of transforming the course from the instructor's perspective (Kleinsasser, 2005). The descriptions of both courses allow insight into the course objectives and contents as well as the manner in which the courses were conducted. In the case of the O'Loughlin (2006) study, the course investigated was a postgraduate TESOL course in Australia entitled Assessment in the Language Classroom. The course goals were to enable students to develop (a) a sound understanding of key concepts in second language assessment; (b) their ability to critically evaluate existing assessment documents; and (c) their capacity to design or adapt assessment instruments for their particular teaching contexts (p. 73).
The course included both practical components and discussion of conceptual themes, such as social issues in language testing. The researcher reports that the two students whose narratives form the research focus of the study both attained the course objectives. However, differences emerged as to their willingness and capacity to embrace new ideas in the area of language assessment. These differences are attributed to personal background and professional experience and context, emphasizing the need to consider learners' diverse cultural backgrounds and experiences when planning and conducting the course (O'Loughlin, 2006). Hence the approach advocated in this research with regard to teaching language assessment courses is a learner-centered one. The second study, by Kleinsasser (2005), provides an insightful description of how the language testing and assessment course that
had been taught for a number of years in a Master of Arts language program was transformed from a content-focused to a learner-centered and teaching content-focused course. The course put into practice constructivist notions of student involvement at every stage, with teacher mediation in the form of on-going negotiation. The course contents included a wide range of topics, ranging from hands-on item writing, construct validity, providing feedback, developing standards and assessing reasoning skills, to notions of impact and critical perspectives on language testing, and, as Kleinsasser notes (p. 83), "changing thinking from the use of psychometrics to a focus on (second language) educational assessment". Through the discussions, critiquing and writing of testing and assessment items, a professional community of practice was formed: "Our community was interpreting, expressing and negotiating meaning about the practices and theories related to assessment materials while experiencing the non-linear process of assessment materials development" (p. 90). The final course evaluation was also a joint collaborative venture, with the students taking an active part in the decision-making process, including the final course grade. The two research studies (Kleinsasser, 2005; O'Loughlin, 2006) report on courses which appear to incorporate LAL themes. What is particularly striking, especially in Kleinsasser (2005), is the manner in which learning was conducted and knowledge constructed, exemplifying traits associated with both learning and assessment cultures.
IV Acquiring language assessment literacy: A common framework
Acquiring expertise in the different aspects of language assessment is clearly a multifaceted process of learning about and mastering diverse competencies in different areas, as was demonstrated above. Course instructors thus face the difficult dilemma of choosing and prioritizing the issues which will be included in or excluded from the course syllabus. Based on the above discussion of LAL, it is suggested that language assessment courses focus on learning, negotiating, discussing, experiencing and researching a core language assessment framework. This framework construes language assessment not as a collection of assessment tools and forms of analysis, but rather as a body of knowledge and research grounded in theory and epistemological beliefs, and connected to other bodies of knowledge in education, linguistics and applied linguistics. The core competencies will reflect current
views about the social role of assessment in general and language assessment in particular, as well as contemporary views about the nature of language knowledge, and give due emphasis to both classroom and external assessment practices. The focus and intensity would vary depending on the target audience, but an introduction to the core components would be obtained by all participants, including discussion of some of the unresolved controversies and tensions in the field. To borrow a language learning metaphor, mastering proficiency in the discrete skills of language assessment (i.e., specific item types or data analysis procedures) will not necessarily lead to acquiring LAL. On the other hand, integration of the conceptual basics in the educational and language-related zones will enable would-be language assessors to gain an initial footing in the field, and to speak the language of assessment. Additional competencies can be developed, refined and elaborated upon for specific purposes, such as constructing high-stakes assessment instruments or setting and implementing language assessment initiatives. The life-long learning approach which characterizes learning in this era (Kalantzis, Cope & Harvey, 2005) is applicable to language assessors who, similar to other professional groups, need to constantly inform and update their knowledge in the area of measurement and language theories vis-à-vis new research findings, technological innovations and approaches in related areas. A report by the Committee on Assessment and Evaluation in Education in the National Israeli Academy for Sciences and Humanities (2005) establishes a common framework or knowledge base in assessment and evaluation. The framework is based on shared concepts intended to guide the development of curricula and professional development programs in the area.
The core framework also branches off to outline the specific knowledge base required for different professional groups: teachers, principals, assessment and evaluation coordinators, program evaluators and psychometricians. The knowledge base includes issues in educational assessment and large-scale assessment, statistics, ethical issues, and program evaluation. The framework details precisely which of the knowledge components need to be acquired as declarative or procedural knowledge, with different levels of proficiency or mastery in each. Considering the major changes that have occurred in the last decade in educational and language assessment, perhaps it is time for the language testing and assessment profession to take a similar initiative. The time is ripe for revisiting the notion of drafting or adapting a set of standards for language assessment competencies as
was expressed by Bailey and Brown in their 1995 paper. Such core competencies would be intended for language assessment professionals in educational contexts, and might also detail particular skills for specific capacities. Some of the skills can be integrated with other components of language education programs and with research studies in various areas of applied linguistics and language pedagogy. Such an initiative would greatly facilitate the meaningful construction of language assessment courses and make a major contribution to the field at large.
V References
Assessment Reform Group (2002). Assessment for learning: Ten principles. Available at: http://arg.educ.cam.ac.uk/CIE3.pdf
Bailey, K. M. & Brown, J. D. (1995). Language testing courses: What are they? In A. Cumming & R. Berwick (Eds.), Validation in language testing (pp. 236–256). Clevedon, UK: Multilingual Matters.
Birenbaum, M. (1996). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge (pp. 3–29). Boston, MA: Kluwer Academic.
Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan International. www.pdkintl.org/kappan/kbla9810.htm
Boyles, P. (2005). Assessment literacy. In M. Rosenbusch (Ed.), National Assessment Summit Papers (pp. 1–15). Ames, IA: Iowa State University. http://www.nflrc.iastate.edu/nva/newsite/ciaa/assessment_papers_Boyles.html
Brindley, G. (2001a). Language assessment and professional development. In C. Elder, A. Brown, K. Hill, N. Iwashita, T. Lumley, T. McNamara & K. O'Loughlin (Eds.), Experimenting with uncertainty: Essays in honour of Alan Davies (pp. 126–136). Cambridge: Cambridge University Press.
Brindley, G. (2001b). Outcomes-based assessment in practice: Some examples and emerging insights. Language Testing, 18(4), 393–408.
Broadfoot, P. (1996). Education, assessment and society. Buckingham: Open University Press.
Broadfoot, P. (2005). Dark alleys and blind bends: Testing the language of learning. Language Testing, 22(2), 123–141.
Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5–13.
Brown, J. D. & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32(4), 653–675.
Byrnes, H. (2008). Assessing content and language. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). In N. Hornberger (general
editor), Encyclopedia of language and education (2nd ed.). New York: Springer Science and Business Media, pp. 37–52.
Canagarajah, S. (2006). Changing communicative needs, revised assessment objectives: Testing English as an international language. Language Assessment Quarterly, 3(3), 229–242.
Canagarajah, S. (2007). The ecology of global English. International Multilingual Research Journal, 1(2), 89–100.
Committee on Assessment and Evaluation in Education (2005). The knowledge base for assessment and evaluation in education: A framework for curricula: Academic studies and professional development programs. Israel Academy of Sciences and Humanities. http://www.academy.ac.il/data/projects/34/Assessment_and_Evaluation.pdf
Council of Europe (2001). Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge: Cambridge University Press.
Cummins, J. (2001). Negotiating identities: Education for empowerment in a diverse society (2nd ed.). Los Angeles: California Association for Bilingual Education.
Cushing Weigle, S. & Jensen, L. (1997). Issues in assessment for content-based instruction. In M. A. Snow & D. Brinton (Eds.), The content-based classroom: Perspectives on integrating language and content (pp. 201–212). White Plains, NY: Longman.
Davison, C. (2007). Views from the chalkface: English language school-based assessment in Hong Kong. Language Assessment Quarterly, 4(1), 37–68.
Dochy, F. & Segers, M. (2001). Using information and communication technology (ICT) in tomorrow's universities and using assessment as a tool for learning by means of ICT. In H. J. Van der Molen (Ed.), Virtual university? Educational environments of the future (pp. 67–83). London: Portland.
Elder, C. & Davies, A. (2006). Assessing English as a lingua franca. Annual Review of Applied Linguistics, 26, 282–304.
Falsgraf, C. (2005, April). Why a national assessment summit? New Visions in Action. National Assessment Summit, Alexandria, VA. http://www.nflrc.iastate.edu/nva/worddocuments/assessment_2005/pdf/nsap_introduction.pdf
Filer, A. (Ed.) (2000). Assessment: Social practice and social product. London: RoutledgeFalmer.
Fulcher, G. & Davidson, F. (2007). Language testing and assessment: An advanced resource book. London: Routledge.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.
Hargreaves, A., Earl, L. & Schmidt, M. (2002). Perspectives on alternative assessment reform. American Educational Research Journal, 39(1), 69–95.
Hoyt, K. (2005, April). Assessment: Impact on instruction. New Visions in Action. National Assessment Summit, Alexandria, VA.
Huerta-Macias, A. (1995). Alternative assessment: Responses to commonly asked questions. TESOL Journal, 5(1), 8–11.
Inbar-Lourie, O. (2008). Language assessment culture. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). In N. Hornberger
(general editor), Encyclopedia of language and education (2nd ed.). New York: Springer Science and Business Media, pp. 285–300.
Johnson, D. & Kress, G. (2003). Globalisation, literacy and society: Redesigning pedagogy and assessment. Assessment in Education, 10(1), 5–14.
Kalantzis, M., Cope, B. & Harvey, A. (2003). Assessing multiliteracies and the new basics. Assessment in Education, 10(1), 15–26.
Kaufman, D. & Crandall, J. (2005). Standards and content-based instruction: Transforming language education in primary and secondary schools. In D. Kaufman & J. Crandall (Eds.), Content-based instruction in primary and secondary school settings: Case studies in TESOL (pp. 1–7). Alexandria, VA: TESOL Publications.
Kleinsasser, R. C. (2005). Transforming a postgraduate level assessment course: A second language teacher educator's narrative. Prospect, 20, 77–102.
Kunnan, A. J. (Ed.) (2000). Fairness and validation in language assessment: Selected papers from the 19th Language Testing Research Colloquium, Orlando, Florida. Cambridge, UK: Cambridge University Press.
Kunnan, A. J. (2004). Regarding language assessment. Language Assessment Quarterly, 1(1), 1.
Lantolf, J. P. (2000). Second language learning as a mediated process. Language Teaching, 33(2), 79–96.
Lantolf, J. P. & Poehner, M. E. (2008). Dynamic assessment. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). In N. Hornberger (general editor), Encyclopedia of language and education (2nd ed.). New York: Springer Science and Business Media, pp. 273–284.
Leung, C. (2004). Developing formative teacher assessment: Knowledge, practice and change. Language Assessment Quarterly, 1(1), 19–41.
Leung, C. & Rea-Dickins, P. (2007). Teacher assessment as policy instrument: Contradictions and capacities. Language Assessment Quarterly, 4(1), 6–36.
Little, D. (2005). The Common European Framework and the European Language Portfolio: Involving learners and their judgment in the assessment process. Language Testing, 22(3), 321–336.
Lynch, B. K. (2001). Rethinking assessment from a critical perspective. Language Testing, 18(4), 351–372.
Lynch, B. & Shaw, P. (2005). Portfolios, power and ethics. TESOL Quarterly, 39(2), 263–297.
Malone, M. E. (2008). Training in language assessment. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). In N. Hornberger (general editor), Encyclopedia of language and education (2nd ed.). New York: Springer Science and Business Media, pp. 225–240.
McKay, P. & Brindley, G. (2007). Educational reform and ESL assessment in Australia: New reforms, new tensions. Language Assessment Quarterly, 4(1), 69–84.
McMillan, J. H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8). Retrieved November 9, 2007 from http://PAREonline.net/getvn.asp?v=7&n=8
McNamara, T. (2006). Second language testing and assessment: Introduction. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning (pp. 775–778). Mahwah, NJ: Lawrence Erlbaum Associates.
McNamara, T. & Roever, C. (2006). Language testing: The social dimension. Oxford: Blackwell Publishing.
Met, M. (1999). Content-based instruction: Defining terms, making decisions. College Park, MD: National Foreign Language Center.
Morrow, K. (Ed.) (2004). Insights from the Common European Framework. Oxford: Oxford University Press.
National Research Council (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment. J. Pellegrino, N. Chudowsky & R. Glaser (Eds.). Board on Testing and Assessment, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. http://www.nap.edu/openbook.php?record_id=10019&page=R1
O'Loughlin, K. (2006). Learning about second language assessment: Insights from a postgraduate student on-line subject forum. University of Sydney Papers in TESOL, 1, 71–85.
Purpura, J. E. (2004). Assessing grammar. Cambridge: Cambridge University Press.
Rea-Dickins, P. (2007). Learning or measuring? Exploring teacher decision-making in planning for classroom-based language assessment. In S. Fotos & H. Nassaji (Eds.), Form-focused instruction and teacher education: Studies in honour of Rod Ellis (pp. 193–210). Oxford: Oxford University Press.
Read, J. & Erlam, R. (2007). Language Assessment and Evaluation course description. http://www.arts.auckland.ac.nz/subjects/index.cfm?P=9003
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.
Shohamy, E. (2001). The power of tests. Harlow, England: Pearson Education.
Shohamy, E. (2008). Introduction to Volume 7: Language testing and assessment. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). In N. Hornberger (general editor), Encyclopedia of language and education (2nd ed.). New York: Springer Science and Business Media, pp. xiii–xxii.
Solano-Flores, G. & Trumbull, E. (2003). Examining language in context: The need for new research and practice paradigms in the testing of English language learners. Educational Researcher, 32(2), 3–13.
Standards for teacher competence in educational assessment of students (1990). American Federation of Teachers, National Council on Measurement in Education, National Education Association. http://www.unl.edu/buros/bimm/html/article3.html
Stoynoff, S. & Chapelle, C. A. (2005). ESOL tests and testing: A resource for teachers and program administrators. Alexandria, VA: TESOL Publications.
Taylor, L. (2007, April). Two by two: Paired interaction in large-scale proficiency assessment. Paper presented at the American Association for Applied Linguistics annual conference, Costa Mesa, CA.
Teasdale, A. & Leung, C. (2000). Teacher assessment and psychometric theory: A case of paradigm crossing? Language Testing, 17(2), 163–184.
Teemant, A., Smith, M., Pinnegar, S. & Egan, M. W. (2005). Modeling sociocultural pedagogy in distance education. Teachers College Record, 107(8), 1675–1698.
Windschitl, M. (2002). Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Review of Educational Research, 72(2), 131–175.
Wolf, D., Bixby, J., Glenn, J. & Gardner, H. (1991). To use their minds well: Investigating new forms of student assessment. Review of Research in Education, 17, 31–74.