Writing placement tools: Constructing and understanding students' transition into college writing

1. Key ideas about student writing and writing placement

Underpinning the Tools & Tech Forum are three ideas about student writing: One, student writing is constructed according to how it is assessed. Two, student writing is understood according to how it is analyzed. And three, because assessment and analysis constitute writing and our understanding of it, we need as much information as possible as we determine how to assess and analyze student writing. In support of this goal, the Tools & Tech Forum offers reviews of assessment tools and technologies, in an effort to support informed decisions about writing assessments and how they are interpreted and used.

This year's Tools & Tech Forum focuses on a set of assessment practices affecting millions of students each year: college writing placement. Whether students are non-native or native writers of English, and especially if they are pursuing higher education in the United States, they will likely need to complete a writing placement assessment as their very first writing task as an enrolled college student. These writing placement assessments are used to pair incoming college students with a course or level appropriate for their writing preparation and skills by determining a point of entry within an institution's curricular sequence (Crusan, 2002; Haswell, 2005; Leki, 1991).

There are myriad options for assessing students' writing placement. Many institutions rely on locally-designed essay tests and/or multiple-choice questions (Gere, Aull, Lancaster, Perales Escudero, & Vander Lei, 2013). Some use writing portfolios (Yancey, 1999). Many draw on national placement tests or more general standardized tests, such as the Scholastic Aptitude Test (SAT), the Test of English as a Foreign Language (TOEFL), or the American College Test (ACT) (Crusan, 2002; Elliot, Deess, Rudniy, & Joshi, 2012a). Some institutions rely on a combination of standardized and locally-designed assessments (Peckham, 2009). And while some institutions consult TOEFL scores for international and/or English language learning students (Williams, 1995), many institutions use the same placement process for all incoming first-year students (Gere, Aull, Green, & Porter, 2010).

These various writing placement choices foreground different cultural and institutional values, from broad writing constructs (e.g., emphasis on student self-assessment via Directed Self-Placement) to more specific, corresponding choices (e.g., certain Directed Self-Placement questions and not others) (Toth & Aull, 2014). The results or scores of these placement assessments are then used by academic advisers to place students in writing courses (or to exempt them from such courses), sometimes with input from instructors or students (Crusan, 2002; Elliot, Deess, Rudniy, & Joshi, 2012b). That placement will, in turn, directly impact students' future coursework. Furthermore, the placement process and outcome will contribute to students' perceptions about the kind of writing they are expected to do in higher education, even if a placement task differs substantially from what they will later write as college students (Aull, 2015).
Many students will, based on their writing placement, form perceptions regarding how they fare as writers with respect to the writing constructs they perceive are valued. Thus the stakes of writing placement are high, and they entail a range of important decisions. In their introduction to the Journal of Writing Assessment Special Issue on Two-Year College Writing Placement, Kelly-Riley and Whithaus (2019) put it this way:

Assessment practices reinforce cultural and educational values; the ways in which assessments—particularly writing placement assessments—work should be examined to understand the values they reinforce. If the assessments are not evolving to reflect current values and expectations, they may be detrimental to the intended social and educational effects of increasing access to higher education.

Placement assessment decisions entail conceptual and practical choices, including (1) valued writing constructs—what a given institution or set of institutions wants to know about student writing; (2) the type and design of the assessment—how such information will be gathered; and (3) the interpretation and consequence of said information—how the information will be used, and by whom. Any writing placement assessment that is used over time, then, ideally includes construct evidence, which forms a precondition to and ongoing part of "the traditional and emerging conceptual definition of writing" in a local context; scoring evidence that rules are applied accurately and consistently; extrapolation evidence that assessment scores allow predictions beyond test performance; and decision evidence that patterns over time support an interpretative argument to justify the use of the assessment (e.g., correlation between students' writing course placement and their retention and performance in their writing courses) (Elliot et al., 2012b, pp. 293–294). Even in such ideal scenarios, placement decisions are still "reasonable inferences"; they are "at best fuzzy"—based on probability, not certainty (Peckham, 2009, p. 67).

2. Past and present considerations for student writing placement

Like any writing assessment choice, writing placement choices depend on what is available and possible for a given institution at a given point in history. In last year's Tools and Tech Forum, I noted that a quarter century ago, prevailing themes in writing assessment research included task design, rater judgments, the relationship between textual features and writing quality, and new attention to educational environment and actual impact on actual students. Over the past quarter century, this research has expanded and often foregrounds the social and cognitive dimensions of written language as these variables relate to and illuminate a range of design issues, from task and rubric design to scoring processes and their accompanying interpretation and use arguments (Behizadeh & Engelhard, 2011).

Within these larger developments, writing placement practices have likewise evolved considerably. Prior to the 1970s, most placement processes relied on indirect measures of writing assessment, or assessments in which students read and answer questions about the grammar, style, and lexis of written passages as an indirect measure of their writing knowledge.
Fifty years later, many placement processes continue these practices, while many also include a direct measure of writing assessment, or constructed response tasks in which students write a text as a measure of their writing knowledge. Brian Huot observed that, despite continued imperfections in the assessment of student writing, the shifts from indirect to direct writing placement assessments evidenced "great strides" (Huot, 1994, p. 49).

It is also fair to say that writing placement assessments have a great deal of room for improvement. Current placement assessments still tend to focus on individual versus intersubjective composing processes, and to emphasize individual cognitive mastery versus more interpersonal and intrapersonal domains of writing. This means, for example, that they often do not account for students' self-efficacy, which can heavily influence how students approach and persevere vis-à-vis new writing tasks (Pajares & Valiante, 2006). Many placement assessments also continue to rely on standardized test scores even as these have shown bias and misplacement (Elliot et al., 2012a; Kokhan, 2013). Writing placements, too, often rely on outdated and/or top-down practices that do not include student input. This can mean, for instance, that placement practices fail to account for populations such as Generation 1.5 students (Di Gennaro, 2008), or that they use labels such as "non-native speaker" that are deficit epithets to students, rather than a more additive label such as "multilingual student" (Ruecker, 2011, p. 104).

In sum, we do not yet have sufficient research regarding connections between writing placement and opportunity and identity (Inoue & Poe, 2012), nor about the validity of writing placement practices with respect to multiple domains of writing (MacArthur & Graham, 2008). More research on writing placement assessments, and more people in writing programs with training in assessment, will help fill these gaps. Huot has in the past described an "alarming" landscape characterized by a "dearth of research and theory in writing assessment and the lack of qualified personnel who direct writing placement programs" (Huot, 1994, p. 61). Crusan (2002) has posed a more specific concern about the dearth of qualified personnel to help guide writing placement decisions impacting English language learners.

As research on writing placement assessments continues, we need assessment validation research—what Cronbach (1988) pointedly labels "validity arguments"—that links concepts, evidence, social and personal consequences, and values. The reviews below contribute to this endeavor by making several writing placement tools more transparent, and by presenting them in such a way that they can be critically evaluated and compared. The authors have reviewed five writing course placement tools, ranging from the national to the local: (1) Directed Self-Placement, a popular approach focused on student choices that encompasses several writing placement options; (2) ACCUPLACER, a widely-used automated writing placement tool developed by the College Board; (3) Smarter Balanced, a secondary summative assessment aligned with Common Core Standards used to test career and college readiness; (4) the Test of English as a Foreign Language (TOEFL), an assessment designed by the Educational Testing Service, sometimes used as a writing placement tool for multilingual students; and (5) the University of Utah Writing Placement System, a locally-designed writing placement approach.
Together, these reviews provide a valuable look at the possibilities and limitations of five placement assessments, four of which are taken by millions of students each year and one of which instead aims to be more local and circumscribed. Four of the reviews are written by doctoral students at the University of Michigan (UM), who are studying writing assessment as part of their training in English and Education and are mentored by the interdisciplinary team of Anne Ruggles Gere, UM Gertrude Buck Collegiate Professor and Director of the Sweetland Center for Writing, and Anne Curzan, UM Geneva Smitherman Collegiate Professor and Associate Dean for Humanities. The review of TOEFL is written by Dr. Jon Smart of Wake Forest University, whose expertise as both a writing instructor for multilingual writers and a staff member in the university's Global Programs office allows him to appreciate the faculty and administrative sides of writing placement choices.

Below, I briefly introduce each review and emphasize how it achieves the Tools and Tech Forum goals of critical and conscientious review. I want in particular to emphasize that the reviews remind us that every writing placement assessment involves epistemic choices: each offers a particular view of student writing, what we can know about it, and what we cannot know about it. To this end, you will see that the authors strive to highlight the constructs of writing entailed in each placement approach. Some assessments, for instance, include indirect measures of writing ability, such as the ACCUPLACER Next-Generation Writing Test; all four include the possibility of direct measures of writing ability; and each approach constitutes particular types of writing knowledge and conclusions about it. For example, an indirect assessment by ACCUPLACER that foregrounds prescriptive grammatical rules about written English will offer very different information about students than a direct assessment by the University of Utah that asks students to write a persuasive, essayistic argument (the latter, too, will offer different information than a constructed response task that asks students to write an analytic report). In a final example, all of the writing placement tools reviewed foreground "standard English," a point explicitly noted in the reviews. I have asked the reviewers to refer to this register as "standardized English" in order to use a label that underscores that conventional academic written English is not an inherent standard for "good writing" but is, like standardized writing tasks, a widely-used construct that values certain linguistic choices rather than others.

In any one of these examples, then, the assessment emphasizes specific values and aspects of writing. In turn, those writing constructs repeatedly emphasized in the assessments will influence student, instructor, and administrator understanding of writing and students' writing knowledge. The reviews therefore offer needed attention to what we know about writing placement tools, helping us consider how assessment tools enact writing assessment "as a frame (a structure) and a framing process (an activity)" (O'Neill, 2012, p. 442). By ensuring attention to these frames and framing processes, we can help support ethical, valid assessments at a given institution for given student populations.
By this I mean that such efforts can help us use writing placement in ways that hold us accountable, by ensuring we are aware, intentional, and critical about the constructs of writing we emphasize and leave out, and to what end. This attentiveness can also help ensure that instruction and assessment are clearly connected. As part of ethical assessments that make expectations transparent rather than tacit, such connections are an important response to calls for social justice in writing assessment (Poe et al., 2018). In other words, ethical, valid assessments are characterized by transparency about goals, instruction, and expectations and by consistency across assessment design, interpretation, and consequences (Kane, 2016a, 2016b; Poe, 2014). The five reviews below help highlight that there are many possibilities for achieving or eliding these goals when it comes to college writing placement.

3. Five tools for student writing placement

First, Andrew Moos and Kathryn Van Zanen review Directed Self-Placement (DSP), a popular approach to writing placement that invites students to self-select their writing course based on information from a task and/or questions. Moos and Van Zanen provide a useful overview of DSP options and research, including studies that indicate the clear value of DSP for marginalized student populations. At the same time, Moos and Van Zanen caution that, while DSP "is a promising option for institutions seeking to better empower students in the placement process," instructors and administrators must remain committed to additional research and the need for locally-responsive DSP design. Moos and Van Zanen accordingly call for more information that will help make DSP systems as valid and equitable as possible. To this end, in their "limitation and future research section," Moos and Van Zanen outline a list of considerations for institutions and scholars examining and using writing placement in ways that are attentive to students' needs.

While DSP depends on local student input, the second and third reviews outline placement processes that rely on standardized, externally-evaluated assessments. In the second review, Ruth Li and Sarah Hughes outline the College Board's ACCUPLACER tool, a widely-used automated writing assessment that includes a multiple-choice test and an on-demand essay. Li and Hughes underscore that this tool offers efficiency, in the sense that it does not include the time- and labor-intensive aspects of writing assessments that rely on human evaluators. But they also review research showing that ACCUPLACER adversely impacts the placement of women and students of color, thereby failing ethical assessment standards that ensure the advancement of opportunity for all students. Li and Hughes also caution that ACCUPLACER risks "detect[ing] prescriptively sanctioned English without flexibility or consciousness of rhetorical nuance." Here and elsewhere, Li and Hughes draw useful attention to the importance of connecting assessment expectations and locally-valued writing constructs. That is to say, if a local context values a recursive process of writing in which students revise their ideas, and an assessment requires a timed writing task with no formal revision, then that assessment will not provide the construct evidence, nor, presumably, the extrapolation or decision evidence, called for in writing placement assessments.
Li and Hughes do note that ACCUPLACER can be used as only one of multiple measures in students' writing placement; the third review offers a more detailed look at a placement tool that draws on multiple measures. Kendon Smith and Kelly Wheeler's "Using Smarter Balanced Grade 11 Summative Assessment in College Writing Placement" reviews a multiple measures approach to writing placement. They suggest that such an approach "offers students a variety of ways to demonstrate their college readiness," and that it offers placement advisors information to help students make decisions about writing courses suited to them. Accordingly, Smith and Wheeler underscore that such a multiple measures approach helps increase the chances for students to place accurately. At the same time, they caution that Smarter Balanced makes it difficult to determine the validity of any one measure, in that it cannot be disaggregated from the others. In other words, because the direct assessment of the performance task is "collapsed with the indirect assessment of multiple choice questions," Smith and Wheeler caution that that single score "may not be indicative of a student's writing ability and may result in under-placement of students." Thus Smith and Wheeler outline important possibilities and limitations of this multiple measures approach. They close by calling for more research on the predictive value of Smarter Balanced assessments, with particular attention to students' transition from high school to college or career writing.

In the fourth review, Jon Smart brings attention to the use of TOEFL scores as a tool specifically in the writing placement of international and/or English language learner students entering English-medium universities. While the TOEFL is also often used for these students in university admissions, Smart reviews the use of the TOEFL as a part of determining their placement in an institution's writing curriculum. Sometimes, the writing courses into which these students might be placed are specifically designed for students whose native or most proficient language is not English; in those cases, the TOEFL score is used to determine whether students might benefit from an English learner writing course. Other times, the TOEFL score is used instead as a measure like an SAT score, in order to place students into writing courses available to all students at an institution. As Smart notes, authentic language data from a range of university community members was used in the corpus-based development of tasks for the TOEFL. This means that the assessment aims to support a construct of language proficiency grounded in authentic language use. Still, Smart underscores that "little attention is paid to socially-driven language variation that non-native speakers may encounter in an English-medium university," a useful reminder of the boundaries of any writing construct and therefore of any writing assessment tool.

In the final review, Crystal Zanders and Emily Wilson delineate the University of Utah's writing placement system, offering a helpful example of a locally-controlled placement process. As their review shows, the University of Utah's writing placement system also offers an amalgamation of several of the other reviewed tools: it features standardized tests like the SAT or ACT (which help determine who takes the Writing Placement Exam), it relies on multiple measures, and it includes a holistically-scored writing task.
Wilson and Zanders' review highlights the value of having local writing instructors evaluate the students' written responses, a practice that poses possibilities for construct validity as well as connection and consistency across writing assessment and instruction. But Zanders and Wilson also note the challenges that such an approach presents in terms of scalability. In addition, Haswell and Elliot (2019) clarify several considerations regarding holistic scoring, an approach, like any assessment, that is "entangled without recourse to its human enactments."

Together, these reviews offer valuable examples of how different writing assessments—in this case, assessments that determine the very instruction and assessment that students will and will not encounter next—include possibilities and limitations. There are a variety of reasons that an institution may select one and not another. But whatever placement process is designed, that process will shape student writing and what we know about it.

4. Supporting informed decisions regarding student writing placement

The authors have organized their reviews similarly, with attention to connections and distinctions across placement tools, in order to aid readers' ability to compare and contrast placement assessments. More specifically, each review opens with an introduction to key details as well as a description of possibilities enabled by the placement tool. Each review likewise draws attention to connections between the reviewed tool and other research, placement tools, and constructs, providing added reference points for the various options confronted by writing instructors and administrators. Each review also attends explicitly to potential limitations of each tool; relatedly, each review reflects on potential future developments and research.

A final note: though English language learner students are often placed using them, writing placement assessments are distinct from assessments that measure writing proficiency. I am eager to have a Tools & Tech Forum dedicated to reviews of writing proficiency tools soon. To this end, I welcome related inquiries via email or submissions through the Assessing Writing interface.

In closing, I extend my thanks to these authors for their important work to illuminate writing placement tools and how they shape student opportunity and what we know about student writing. And as ever, I thank you, readers, for your time and dedication to examining writing assessment tools and tech.

Conflict of interest

Nothing declared.

References

Aull, L. L. (2015). First-year university writing: A corpus-based study with implications for pedagogy. London: Palgrave Macmillan.
Behizadeh, N., & Engelhard, G. (2011). Historical view of the influences of measurement and writing theories on the practice of writing assessment in the United States. Assessing Writing, 16(3), 189–211.
Cronbach, L. J. (1988). Five perspectives on validity argument. Test validity, 3–17.
Crusan, D. (2002). An assessment of ESL writing placement assessment. Assessing Writing, 8(1), 17–30.
Di Gennaro, K. (2008). Assessment of Generation 1.5 learners for placement into college writing courses. Journal of Basic Writing (CUNY), 27(1), 61–79.
Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012a). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285–313.
Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012b). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285–313.
Gere, A. R., Aull, L., Green, T., & Porter, A. (2010). Assessing the validity of directed self-placement at a large university. Assessing Writing, 15(3), 154–176. https://doi.org/10.1016/j.asw.2010.08.003.
Gere, A. R., Aull, L. L., Lancaster, Z., Perales Escudero, M., & Vander Lei, E. (2013). Local assessment: Using genre analysis to validate directed self-placement. College Composition and Communication, 64(4).
Haswell, R. (2005). Post-secondary entrance writing placement. CompPile, March.
Haswell, R., & Elliot, N. (2019). Early holistic scoring of writing: A theory, a history, a reflection. Louisville, Colorado: Utah State University Press.
Huot, B. (1994). A survey of college and university writing placement practices. WPA: Writing Program Administration, 17(3), 49–65.
Inoue, A. B., & Poe, M. (2012). Race and writing assessment. Studies in Composition and Rhetoric, Vol. 7. ERIC.
Kane, M. T. (2016a). Validation strategies: Delineating and validating proposed interpretations and uses of test scores. Handbook of test development, 64–80.
Kane, M. T. (2016b). Explicating validity. Assessment in Education: Principles, Policy & Practice, 23(2), 198–211.
Kelly-Riley, D., & Whithaus, C. (2019). Editors' introduction: Special issue on two-year college writing placement. Journal of Writing Assessment, 12(1).
Kokhan, K. (2013). An argument against using standardized test scores for placement of international undergraduate students in English as a Second Language (ESL) courses. Language Testing, 30(4), 467–489.
Leki, I. (1991). A new approach to advanced ESL placement testing. WPA: Writing Program Administration, 14(3), 53–68.
MacArthur, C. A., & Graham, S. (2008). Writing research from a cognitive perspective. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research. Guilford Press.
O'Neill, P. (2012). How does writing assessment frame college writing? Writing assessment in the 21st century: Essays in honor of Edward M. White, 439–456.
Pajares, F., & Valiante, G. (2006). Self-efficacy beliefs and motivation in writing development. Handbook of writing research, 158–170.
Peckham, I. (2009). Online placement in first-year writing. College Composition and Communication, 517–540.
Poe, M. (2014). The consequences of writing assessment. Research in the Teaching of English, 48(3), 271–275.
Poe, M., Inoue, A. B., & Elliot, N. (Eds.). (2018). Writing assessment, social justice, and the advancement of opportunity. Fort Collins, Colorado: The WAC Clearinghouse and University Press of Colorado.
Ruecker, T. (2011). Improving the placement of L2 writers: The students' perspective. WPA: Writing Program Administration, 35(1).
Toth, C., & Aull, L. (2014). Directed self-placement questionnaire design: Practices, problems, possibilities. Assessing Writing, 20(0), 1–18. https://doi.org/10.1016/j.asw.2013.11.006.
Williams, J. (1995). ESL composition program administration in the United States. Journal of Second Language Writing, 4(2), 157–179.
Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50(3), 483–503.

Laura L. Aull
Wake Forest University, United States
E-mail address: aulll@wfu.edu