Juxtapeer: Comparative peer review yields higher quality feedback and promotes deeper reflection
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. ACM Digital Library (dl.acm.org).
Peer review asks novices to take on an evaluator's role, yet novices often lack the perspective to accurately assess the quality of others' work. To help learners give feedback on their peers' work through an expert lens, we present the Juxtapeer peer review system for structured comparisons. We build on theories of learning through contrasting cases, and contribute the first systematic evaluation of comparative peer review. In a controlled experiment, 476 consenting learners across four courses submitted 1,297 submissions, 4,102 reviews, and 846 self-assessments. Learners assigned to compare submissions wrote reviews and self-reflections that were longer and received higher ratings from experts than those who evaluated submissions one at a time. A second study found that a ranking of submissions derived from learners' comparisons correlates well with staff ranking. These results demonstrate that comparing algorithmically-curated pairs of submissions helps learners write better feedback.
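The abstract does not say how the global ranking is derived from learners' pairwise comparisons. As a rough illustration of how such an aggregation could work, the sketch below fits a Bradley-Terry model with the standard minorization-maximization update; the model choice, the function name bradley_terry_ranking, and the toy data are assumptions made for illustration, not details taken from the paper.

```python
from collections import defaultdict

def bradley_terry_ranking(comparisons, iterations=200):
    """comparisons: iterable of (winner_id, loser_id) pairs from peer comparisons.
    Returns submission ids sorted from strongest to weakest."""
    wins = defaultdict(int)          # total wins per submission
    pair_counts = defaultdict(int)   # times each unordered pair was compared
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    # Start every submission at equal strength and refine with the
    # standard minorization-maximization update for Bradley-Terry.
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new_strength = {}
        for i in items:
            denom = 0.0
            for j in items:
                if i == j:
                    continue
                n_ij = pair_counts[frozenset((i, j))]
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            # Small floor keeps strengths positive for submissions
            # that never won a comparison.
            new_strength[i] = max(wins[i], 1e-6) / denom if denom else strength[i]
        # Rescale so strengths stay numerically stable across iterations.
        total = sum(new_strength.values())
        if total:
            new_strength = {i: s * len(items) / total for i, s in new_strength.items()}
        strength = new_strength

    return sorted(items, key=strength.get, reverse=True)

# Toy data: submission A wins most of its comparisons, so it should rank first.
example = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
print(bradley_terry_ranking(example))
```

In a deployed system each comparison would also carry reviewer identity and written feedback; this sketch uses only the outcome of each pairwise judgment.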