The aim of this research was to compare analytic rubric scoring and general impression scoring in peer assessment. A total of 66 university students participated in the study, six of whom volunteered to serve as peer raters. The students prepared a sample study within the scope of a scientific research methods course and presented their work in class. During each presentation, the course instructor and the peer raters scored the student, first with the analytic rubric and then by general impression. The collected data were analyzed with the Rasch model. The results showed that both scoring methods distinguished students from one another with high reliability. In addition, differences between students' ability levels were revealed more clearly when the analytic rubric was used. Regardless of the scoring method, there was a high positive correlation between the ability estimates derived from the peers' scores and those derived from the instructor's scores. Finally, the ability estimates obtained from the peer raters' analytic rubric scoring and from their general impression scoring were strongly and positively related.
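To make the methodology concrete, the ability estimates mentioned above can be illustrated with a minimal sketch of Rasch-model person estimation. This is a hypothetical, simplified dichotomous example (not the study's actual code, which would involve a many-facet design covering raters and criteria); the item difficulties and responses are made up for illustration.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct/endorsed response
    for a person with ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Newton-Raphson maximum-likelihood estimate of person ability,
    given scored responses (0/1) and known item difficulties.
    Assumes a non-extreme raw score (not all 0s or all 1s)."""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(responses) - sum(ps)        # observed minus expected score
        info = sum(p * (1 - p) for p in ps)    # Fisher information
        theta += grad / info
    return theta

# Illustrative data: three items of difficulty -1, 0, 1 logits;
# the person answers the two easier items correctly.
theta_hat = estimate_theta([1, 1, 0], [-1.0, 0.0, 1.0])
```

In the study's setting, ability estimates like `theta_hat` (in logits) would be computed for each student under each scoring condition, and the correlations reported in the abstract would then be taken between these estimate sets.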
Playing a vital role in assuring the reliability of language performance assessment, rater training has been a topic of interest in research on large-scale testing. Similarly, in the context of VSTEP, the effectiveness of the rater training program has been of great concern. Thus, this research was conducted to investigate the impact of the VSTEP speaking rating scale training session in the rater training program