
Automatically Recommending Peer Reviewers in Modern Code Review

Published: 01 June 2016

Abstract

Code review is an important part of the software development process. Recently, many open source projects have begun practicing code review through "modern" tools such as GitHub pull requests and Gerrit, and many commercial software companies use similar tools internally. These tools enable the owner of a source code change to request specific individuals, i.e., reviewers, to participate in the review. However, this task comes with a challenge. Prior work has shown that the benefits of code review depend on the expertise of the reviewers involved. Thus, a common problem faced by authors of source code changes is identifying the best reviewers for their change. To address this problem, we present an approach, cHRev, that automatically recommends the reviewers best suited to participate in a given review, based on their historical contributions as demonstrated in their prior reviews. We evaluate the effectiveness of cHRev on three open source systems as well as a commercial codebase at Microsoft and compare it to the state of the art in reviewer recommendation. We show that by leveraging the specific information in previously completed reviews (i.e., quantification of review comments and their recency), we improve dramatically on the performance of prior approaches, which rely only on generic review information (i.e., reviewers of source code with similar file names and paths) or source code repository data. We also present insights into why cHRev outperforms the existing approaches.
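The abstract describes cHRev only at a high level: candidate reviewers are ranked by their past review comments on the code under change, with more recent activity counting for more. The Python sketch below illustrates that general idea under stated assumptions; the comment record format, the exponential decay, and the half-life parameter are illustrative choices, not the authors' actual scoring model.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records of past review comments: who wrote each comment,
# on which file, and when. The field names are assumptions for illustration.
PAST_COMMENTS = [
    {"reviewer": "alice", "file": "src/auth/login.py", "date": datetime(2016, 3, 1)},
    {"reviewer": "bob",   "file": "src/auth/login.py", "date": datetime(2015, 6, 15)},
    {"reviewer": "alice", "file": "src/db/schema.py",  "date": datetime(2016, 1, 20)},
]

def recommend_reviewers(changed_files, past_comments, now, half_life_days=180, top_k=3):
    """Rank candidate reviewers for a change by their prior review comments
    on the changed files, weighting recent comments more heavily."""
    scores = defaultdict(float)
    for comment in past_comments:
        if comment["file"] not in changed_files:
            continue
        age_days = (now - comment["date"]).days
        # Exponential recency decay (an assumed weighting, not the paper's formula):
        # a comment loses half its weight every `half_life_days`.
        weight = 0.5 ** (age_days / half_life_days)
        scores[comment["reviewer"]] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    ranking = recommend_reviewers({"src/auth/login.py"}, PAST_COMMENTS,
                                  now=datetime(2016, 6, 1))
    for reviewer, score in ranking:
        print(f"{reviewer}: {score:.2f}")
```

In this reading, a reviewer who commented on the changed files last month outranks one whose comments are a year old, which matches the abstract's emphasis on both the quantity and the recency of review comments.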



Published In

IEEE Transactions on Software Engineering  Volume 42, Issue 6
June 2016
100 pages

Publisher

IEEE Press


Qualifiers

  • Research-article


Cited By

  • (2025) Code context-based reviewer recommendation. Frontiers of Computer Science: Selected Publications from Chinese Universities, 19(1). DOI: 10.1007/s11704-023-3256-9. Online publication date: 1-Jan-2025.
  • (2024) Fine-Tuning Large Language Models to Improve Accuracy and Comprehensibility of Automated Code Review. ACM Transactions on Software Engineering and Methodology, 34(1), 1-26. DOI: 10.1145/3695993. Online publication date: 14-Sep-2024.
  • (2024) Unity Is Strength: Collaborative LLM-Based Agents for Code Reviewer Recommendation. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 2235-2239. DOI: 10.1145/3691620.3695291. Online publication date: 27-Oct-2024.
  • (2024) MULTICR: Predicting Merged and Abandoned Code Changes in Modern Code Review Using Multi-Objective Search. ACM Transactions on Software Engineering and Methodology, 33(8), 1-44. DOI: 10.1145/3680472. Online publication date: 30-Jul-2024.
  • (2024) ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation. Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, 362-363. DOI: 10.1145/3639478.3643111. Online publication date: 14-Apr-2024.
  • (2024) Code Impact Beyond Disciplinary Boundaries: Constructing a Multidisciplinary Dependency Graph and Analyzing Cross-Boundary Impact. Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice, 122-133. DOI: 10.1145/3639477.3639726. Online publication date: 14-Apr-2024.
  • (2024) Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, 1-13. DOI: 10.1145/3597503.3623306. Online publication date: 20-May-2024.
  • (2024) Characterizing the Prevalence, Distribution, and Duration of Stale Reviewer Recommendations. IEEE Transactions on Software Engineering, 50(8), 2096-2109. DOI: 10.1109/TSE.2024.3422369. Online publication date: 1-Aug-2024.
  • (2024) Factoring Expertise, Workload, and Turnover Into Code Review Recommendation. IEEE Transactions on Software Engineering, 50(4), 884-899. DOI: 10.1109/TSE.2024.3366753. Online publication date: 23-Feb-2024.
  • (2024) Distilling Quality Enhancing Comments From Code Reviews to Underpin Reviewer Recommendation. IEEE Transactions on Software Engineering, 50(7), 1658-1674. DOI: 10.1109/TSE.2024.3356819. Online publication date: 1-Jul-2024.
