Papers by Dennis Koyama, Ph.D.
Teaching in Higher Education, 2023
In this Points of Departure, we present four vignettes from our teaching to argue that a dispositional view of ecological literacy is needed for Justice-Based Environmental Sustainability (JBES). Having such a perspective can enable educators to encourage students to develop the habit of critically problematizing sustainability (e.g. ‘the sustainability of what values and actions’) and development (e.g. ‘development for and by whom’). From this perspective, people need to develop an inclination to act for the larger planetary good (i.e. praxis) and towards JBES. We also argue that this view of ecological literacy will require teachers to critically engage with their lessons to ensure that lessons are crafted to counter socio-environmental injustices and to foster JBES action throughout one’s life. Taking a dispositional view, then, means that ecological literacy resides in the learner and that its teaching emerges as something akin to the education of moral character.
Journal of Second Language Writing, 2022
The present study, framed by complex dynamic systems theory (de Bot et al., 2007; Larsen-Freeman & Cameron, 2008), explores the relationship between co-adaptation and journaling. Primarily viewed as a theoretical, explanatory mechanism, co-adaptation—a process describing how components of a dynamic system (e.g., a classroom) interact and reorganize their behavior in mutually influential ways to solve local problems (e.g., communication)—also holds promise as a teaching tool for developing L2 academic literacy (Fogal et al., 2020). To investigate this underexamined possibility, the present classroom-based study engages with journaling as a form of educational practice for academic literacy development. This study examines context-related affordances and an iterative series of daily journal entries and instructor reflections through a thematic analysis in line with design-based research and based on Baba (2020). Findings highlight how co-adaptation through journaling can help describe developmental processes and assist teachers and learners in developing academic literacy practices. This work contributes to discussions about how to conceptualize and operationalize classrooms as co-adaptive and relational spaces and invites instructors to reimagine journaling and journaling feedback as a resource for engagement and co-adaptation in the teaching-learning environment.
Innovation in Language Learning and Teaching, 2021
Reflective practice has long been considered an important part of professional development for educators; however, accounts of utilizing reflective practice with groups of experienced teachers remain scarce. We consider reflective practice to be an important means of fostering professional discourse among experienced teachers regarding their pedagogical beliefs and practices. To that end, this paper describes a reflective practice innovation introduced in an undergraduate English composition program in a Japanese university. In what follows we, as experienced teachers, detail how a reflective-practice routine (RPR) was established and used to evaluate the efficacy of existing curricular materials to inform adjunct-faculty onboarding and professional development. In closing, we make several recommendations related to scheduling and to maintaining focused and constructive interactions when implementing an RPR, and we provide examples of how the results of the RPR were applied to improve our onboarding process, teaching practices, and course materials.
Evaluations of teaching effectiveness have taken many forms over the years, but none have been as persistent or commonplace as student ratings of instruction (SRI). SRIs have become a fundamental component of evaluating faculty effectiveness in higher education. Support for SRIs comes from end-users of the data who believe that students are uniquely positioned to evaluate faculty based on their experiences and perceptions of the instruction they received. Pragmatically, institutions tend to rely on SRI results for teacher evaluations because they reason that students learn more from faculty who are highly rated by students. However, to what degree is this enthusiasm warranted? Are SRIs reliable, valid, or trustworthy at all?
The main goals of this chapter are to present an overview of SRI research, explain ways of preparing students for SRIs (both formative and summative), and present methods for teachers to use when examining SRI data. To these ends, this chapter briefly reviews the SRI research, including evidence for the value of SRI data despite commonly held misconceptions about the possible influence of factors such as class size, GPA, gender, and professor rank. Attention is then given to improving responses to questions that tap constructs students are unlikely to be readily able to respond to, such as "Did this course improve your critical thinking skills?", and to general agreement questions about learning, such as "The pacing of the materials was appropriate." Techniques for interpreting constructed responses from students, such as "Stop lecturing!", are also provided. Finally, the chapter highlights the connection between collecting and acting on formative classroom surveys and positive transfer to end-of-term SRIs, offers methods for analyzing SRIs individually, and outlines an approach to teacher development that pairs SRI data with teacher-centered consultations by PD programs.
Studies in Self-Access Learning Journal , 2020
The Covid-19 pandemic has disrupted traditional approaches to education and forced educators to adopt and adapt technologies that allow institutions to remain open and to offer courses and other services so that students can continue their education. This rapid shift to online teaching and learning has shone a light on the need for institutions to support students in working out how to maintain autonomy through meaningful interaction in the online world. In this paper we discuss the transition of a face-to-face university writing center to a synchronous online writing center hosted in the videoconferencing application Zoom. In doing this we explain the rationale that informed our thinking throughout the transition process and how sound pedagogical principles and a focus on the student experience guided our decision-making. Preliminary findings regarding how self-regulated learning was maintained and nurtured in the virtual writing center are presented and discussed.
Keywords: Japanese university, self-regulated learning, writing center, Zoom
How can feedback become a productive resource for students? Much of the research investigating the role of feedback in second language (L2) writing has set out to find an answer to this question. Based on the principle that feedback is given to students as a means of providing useful information to improve their writing (Bitchener, 2008, 2009; Hanaoka & Izumi, 2012), the discussion on feedback includes the idea that learners will transfer knowledge from feedback to improve subsequent writing (Hyland, 1998; Storch & Wigglesworth, 2010). When learners apply feedback to their subsequent writing, they are using collected knowledge, which is the essence of learning transfer (Schwartz, Bransford, & Sears, 2005). Unfortunately, no method of writing feedback has been deemed the frontrunner for improving learner texts (Ferris & Roberts, 2001; Hyland & Hyland, 2006; Storch & Wigglesworth, 2010) or for helping learners transfer writing knowledge across writing situations (James, 2006a, 2006b, 2008, 2009, 2010). While this outlook may seem bleak for writing instructors, recent research provides evidence for presenting learners with expert models as a fruitful way of offering feedback.
Language researchers have shown that using models can provide learners with opportunities to engage with the language in the model, encouraging them to extract useful language from it (Hanaoka & Izumi, 2012). Naturally, the more language the learners notice in the model, the more likely they will recall that language and content elements at a later time (Schmidt, 1990; Schmidt & Frota, 1986). To increase this chance for noticing language, Watson (1982) suggests learners discuss the model in pairs or small groups. Other L2 researchers share Watson’s interest in collaboration in L2 writing classrooms and advocate for engaging learner collaboration in all stages of composition. This collaborative approach identifies learners as a language resource via spontaneous peer feedback (Fernández Dobao & Blum, 2013; Storch, 2005, 2013; Watanabe & Swain, 2007; Wigglesworth & Storch, 2010). Collectively, these perspectives suggest feedback can be crafted by moderating the written form of feedback (e.g., presenting expert models) and by incorporating learner-to-learner interaction (e.g., collaborative tasks). Investigating factors such as these might provide theoretical and practical insight into how learners transfer feedback.
This dissertation explored the usefulness of an expert model and a structured task in an L2 writing classroom. Two interaction levels—individual and collaborative—were examined for how well they supported descriptive language related to the integration of graphical information from model feedback, in a controlled pre/posttest experiment with international university students enrolled in an L2 English composition course. Two approaches to coding the data were taken. The first approach employed a line-by-line coding scheme (Glaser, 1978) that yielded a percentage of content overlap with the expert model—an indicator of factual recall and transfer. The second approach considered how well the essays “fit” the expected data integrations provided in the model—an indicator of transfer of deep writing structure based on the relative balance of global versus local integrations—and was calculated with a chi-square test of fit. The transfer of deep structures was further measured through an analysis of whether students could identify a data integration that did not exist in the model description. The results showed that learners in the dyad condition significantly outperformed learners in the individual and control conditions on content overlap and expected data integrations. The dyad condition also surpassed a truth-wins comparison, which compares actual dyads to the theoretical pooling of individuals’ knowledge (Lorge & Solomon, 1955), and dyads were the only condition to include the target transfer item in their posttest revisions, indicating that dyads were able to understand complex data integrations in ways not available to learners in the individual and control conditions.
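The chi-square test of fit mentioned above can be illustrated with a minimal sketch. The counts and the 70/30 global-versus-local balance below are invented for illustration; this is the general goodness-of-fit computation, not the dissertation's actual data or coding scheme.

```python
import math

def chi_square_fit(observed, expected):
    """Chi-square goodness-of-fit statistic for observed counts
    against expected counts (same total)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: an essay contains 12 global and 8 local data
# integrations; suppose the expert model implies a 70/30 balance.
observed = [12, 8]
total = sum(observed)
expected = [0.70 * total, 0.30 * total]  # [14.0, 6.0]

stat = chi_square_fit(observed, expected)
# df = categories - 1 = 1; the .05 critical value is 3.84, so this
# essay's balance does not differ significantly from the model's.
print(round(stat, 3))  # 0.952
```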
Language Learning & Technology, Feb 2016
Multiple-choice formats remain a popular design for assessing listening comprehension, yet no consensus has been reached on how multiple-choice formats should be employed. Some researchers argue that test takers must be provided with a preview of the items prior to the input (Buck, 1995; Sherman, 1997); others argue that a preview may decrease the authenticity of the task by changing the way input is processed (Hughes, 2003).
Using stratified random sampling techniques, more and less proficient Japanese university English learners (N = 206) were assigned one of three test conditions: preview of question stem and answer options (n = 67), preview of question stem only (n = 70), and no preview (n = 69). A two-way ANOVA, with test condition and listening proficiency level as independent variables and score on the multiple-choice listening test as the dependent variable, indicated that the amount of item preview affected test scores but did not affect high and low proficiency students’ scores differently. Item-level analysis identified items that were harder or easier than expected for one or more of the conditions, and the researchers posit three possible sources for these unexpected findings: 1) frequency of options in the input, 2) location of item focus, and 3) presence of organizational markers.
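Stratified random assignment of the kind described above can be sketched as follows. The roster, condition names, and seed are invented for illustration; this is a minimal sketch of the general technique (shuffling within each proficiency stratum, then assigning round-robin), not the study's actual procedure.

```python
import random

def stratified_assign(learners_by_stratum, conditions, seed=0):
    """Randomly assign learners to conditions within each proficiency
    stratum so that conditions stay balanced across strata."""
    rng = random.Random(seed)
    assignment = {c: [] for c in conditions}
    for stratum, learners in learners_by_stratum.items():
        shuffled = learners[:]          # don't mutate the roster
        rng.shuffle(shuffled)
        for i, learner in enumerate(shuffled):
            assignment[conditions[i % len(conditions)]].append(learner)
    return assignment

# Hypothetical roster: 6 higher- and 6 lower-proficiency learners
roster = {
    "high": [f"H{i}" for i in range(6)],
    "low": [f"L{i}" for i in range(6)],
}
groups = stratified_assign(roster, ["full_preview", "stem_only", "no_preview"])
# Each condition receives 4 learners: 2 from each stratum
print({c: len(g) for c, g in groups.items()})
```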
The purpose of this study was to determine the extent to which performance on the TOEFL iBT speaking section is associated with other indicators of Japanese university students’ abilities to communicate orally in an academic English environment and to determine which components of oral ability for these tasks are best assessed by TOEFL iBT. To achieve this aim, TOEFL iBT speaking scores were compared to performances on a group oral discussion, picture and graph description, and prepared oral presentation tasks, and to their component scores of pronunciation, fluency, grammar/vocabulary, interactional competence, descriptive skill, delivery skill, and question answering. Participants were Japanese university students (N = 222) majoring in English. Pearson product–moment correlations, corrected for attenuation, between scores on the speaking section of TOEFL iBT and the three university tasks indicated strong relationships between the TOEFL iBT speaking scores and the three university tasks and high or moderate correlations between the TOEFL iBT speaking scores and the components of oral ability. For the components of oral ability, pronunciation, fluency, and vocabulary/grammar were highly associated with TOEFL iBT speaking scores while interactional competence, descriptive skill, and delivery skill were moderately associated with TOEFL iBT speaking scores. The findings suggest that TOEFL iBT speaking scores are good overall indicators of academic oral ability and that they are better measures of pronunciation, fluency and vocabulary/grammar than they are of interactional competence, descriptive skill, and presentation delivery skill.
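The correction for attenuation mentioned above divides the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch, with all numeric values invented for illustration (they are not the study's results):

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Disattenuated correlation: the observed correlation divided by
    the square root of the product of the two measures' reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed correlation of .60 between TOEFL iBT speaking
# and a group-oral score, with score reliabilities of .85 and .80
r = correct_for_attenuation(0.60, 0.85, 0.80)
print(round(r, 3))  # 0.728 — the estimated error-free correlation
```

Because measurement error in either score attenuates the observed correlation, the corrected value is always at least as large as the observed one.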
Test takers should have a voice in testing practices (Mathew, 2004). However, when incorporating their input, systematic processes to ensure the validity of testing practices must be followed. Such processes allow for test development to be a more democratic process (Shohamy, 2001), without sacrificing the value of the resulting inferences made from the test scores. This article describes a case study that incorporated the desires of test takers to change the procedures of a group oral discussion test in a university English as a Foreign Language program. A study was designed to determine the extent to which the proposed changes would threaten the validity of the testing process. Specifically, the procedures for the group oral were altered to investigate the effect of interlocutor familiarity. Students were randomly assigned to class-familiar (n = 146) and class-unfamiliar (n = 159) groups to identify to what extent group familiarity affected test takers’ scores in the four assessed categories: pronunciation, fluency, lexis and grammar, and communication skills. For the two groups, no statistically significant difference in scores was found, and score reliability estimates were similar. The implications of the findings are addressed in terms of recommendations for using stakeholder input in the assessment design process.
Studies in Linguistics and Language Teaching, 2012
A major priority of the Kanda English Placement Test (KEPT) research team is to be active in the continual evaluation of the Kanda Assessment of Communicative English (KACE) tasks and their value as an institutional proficiency test. One of the major purposes of having tasks like the Group Oral assessment is to provide direct support for some of our most important curriculum goals here at Kanda, namely communication skills and group conversation management. The Group Oral provides us with information about student abilities in this area, which then enhances our program’s ability to produce positive outcomes related to these proficiencies, including materials improvement, enhanced student motivation, and placement decisions. Now, these are all great things for a program to have in theory, but there is a danger in assuming that simply having a test allows a program to automatically reap the benefits of that test. As responsible testers we must always be aware that the true usefulness of an assessment for any given function depends on how strongly an argument can be made that the assessment measures what it is intended to, and does so reliably and accurately. This includes an ongoing assessment of specific quality measures of the test as a language measurement tool, including test validity, practicality, and usefulness. The present study investigates the effect on a test-taker’s ratings of making an Opening Gambit, or taking the first turn in a group conversation, and the impact such a finding would have on the validity and usefulness of the test.
Studies in Linguistics and Language Teaching, 2010
Many writing instructors believe that the better the quality of the feedback, the more the students’ writing will improve. However, there are many factors that combine to determine the improvement a student makes. One important factor to take into consideration when planning any kind of instruction is confidence. Students’ confidence levels can be affected by a variety of variables but the feedback they receive may be one of the most important variables affecting how confident a student feels about their writing skills. The present study examines the relationship between the frequency of written feedback (self-feedback, peer-feedback and teacher-feedback) students receive on their writing and the changes in their perceptions of their writing ability over a period of one year.
PeerSpectives, 2009
A variety of perspectives can be taken on the topic of motivation in the language learning classroom depending on the context and purpose of the learning. Some scholars say that there has been a shift in the standards and expectations in teaching practices (Harasim, 2000) because of the exponential growth of technology and its proliferation in schools. While some teachers may worry about the effectiveness of technology, such as online learning, Zhao (2005) conducted a meta-analysis showing the effectiveness of technology-based language instruction to be equal to that of the traditional classroom, finding “an overwhelmingly positive effect of technological applications on language learning” (2005, p. 30). This study investigated the positive changes in the online class participation of a demotivated ESL student across one academic year of study in an ESL university context.
Writing placement tests play a role in the academic life of many EFL students. This study investigated a university’s holistic EFL writing placement practices. The purpose of the study was five-fold, to identify: (a) how many essays needed additional readers; (b) how many augmented ratings (ratings that use a plus or a minus) were used; (c) how many placements were made with augmented ratings only; (d) whether the final placement of essays would change if the seven-point augmented rating system were merged into a three-point system using only cut-point scores; and finally, (e) what differences exist, if any, between Pearson’s correlation coefficient and inter-rater agreement ratios for reporting the reliability of the tests. Findings showed that 35% of all essays rated across two academic terms needed additional readers to make placement decisions, and that approximately 40% of all ratings given across all essays rated were augmented ratings. The findings also suggested that the use of rating agreement ratios is superior to the Pearson product–moment correlation coefficient in illuminating the similarity of ratings, rather than a similarity in rating patterns, which supports the conclusions of other researchers working with language performance assessments (e.g., Halleck, 1995, 1996; Kenyon & Tschirner, 2000; Norris, 2001; Thompson, 1995, 1996). The implications of this study are discussed in terms of recommendations to language programs that use holistic writing placement practices.
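The distinction drawn above between agreement ratios and correlation can be shown with a minimal sketch. The ratings below are invented: rater B is always exactly one point above rater A, so the rating *patterns* correlate perfectly even though the two raters never agree on a single rating.

```python
def pearson(xs, ys):
    """Pearson product-moment correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def agreement_ratio(xs, ys):
    """Proportion of essays on which the two raters gave the
    identical rating."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Hypothetical ratings on six essays
rater_a = [1, 2, 2, 3, 4, 5]
rater_b = [2, 3, 3, 4, 5, 6]   # always one point higher

print(round(pearson(rater_a, rater_b), 2))  # 1.0: identical pattern
print(agreement_ratio(rater_a, rater_b))    # 0.0: no identical ratings
```

For placement decisions, what matters is whether raters assign the same rating, which is why an agreement ratio can be the more informative reliability index here.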