research-article

CommentFinder: a simpler, faster, more accurate code review comments recommendation

Published: 09 November 2022
  Abstract

    Code review is an effective quality assurance practice, but it can be labor-intensive since developers must manually review the code and provide written feedback. Recently, a Deep Learning (DL)-based approach was introduced to automatically recommend code review comments based on changed methods. While the approach showed promising results, it requires expensive computational resources and time, which limits its use in practice. To address this limitation, we propose CommentFinder, a retrieval-based approach to recommend code review comments. Through an empirical evaluation of 151,019 changed methods, we evaluate the effectiveness and efficiency of CommentFinder against the state-of-the-art approach. We find that when recommending the best-1 review comment candidate, CommentFinder is 32% better than the prior work at recommending the correct code review comment. In addition, CommentFinder is 49 times faster than the prior work. These findings highlight that CommentFinder could help reviewers reduce manual effort by recommending code review comments, while requiring less computational time.
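
    The retrieval idea summarized in the abstract can be sketched in a few lines: given a newly changed method, find the most similar previously reviewed methods and return the comments reviewers left on them. The sketch below is a minimal illustration only, assuming Python's `difflib.SequenceMatcher` as the similarity measure; the function names and toy data are illustrative and not taken from the paper's artifact.

```python
# Minimal sketch of retrieval-based review-comment recommendation:
# given a changed method, rank previously reviewed methods by textual
# similarity and return the comments attached to the closest matches.
from difflib import SequenceMatcher

def recommend_comments(changed_method, past_methods, past_comments, top_k=1):
    """Return comments attached to the top_k most similar past methods."""
    # Score each historical method by textual similarity to the query.
    scores = [SequenceMatcher(None, changed_method, m).ratio()
              for m in past_methods]
    # Rank historical methods from most to least similar.
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [past_comments[i] for i in ranked[:top_k]]

# Toy historical corpus of reviewed methods and their review comments.
past = [
    "int add(int a, int b) { return a + b; }",
    "void log(String msg) { System.out.println(msg); }",
]
comments = [
    "Consider overflow for large operands.",
    "Use a logging framework instead of println.",
]
print(recommend_comments("int sum(int x, int y) { return x + y; }",
                         past, comments))
# The query is textually closest to the first method, so its comment is returned.
```

    Because such a retriever only computes pairwise similarities over an existing corpus, it needs no model training, which is consistent with the efficiency gains the abstract reports over the DL-based baseline.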



    Information

    Published In

    ESEC/FSE 2022: Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
    November 2022
    1822 pages
    ISBN:9781450394130
    DOI:10.1145/3540250

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Modern Code Review
    2. Software Quality Assurance

    Qualifiers

    • Research-article

    Funding Sources

    • Australian Research Council

    Conference

    ESEC/FSE '22

    Acceptance Rates

    Overall Acceptance Rate 112 of 543 submissions, 21%

    Article Metrics

    • Downloads (Last 12 months): 255
    • Downloads (Last 6 weeks): 18


    Cited By

    • (2024) Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems. Software 3:1, 62-80. DOI: 10.3390/software3010004. Online publication date: 29-Feb-2024.
    • (2024) AI-Assisted Assessment of Coding Practices in Modern Code Review. Proceedings of the 1st ACM International Conference on AI-Powered Software, 85-93. DOI: 10.1145/3664646.3665664. Online publication date: 10-Jul-2024.
    • (2024) An Empirical Study on Code Review Activity Prediction and Its Impact in Practice. Proceedings of the ACM on Software Engineering 1:FSE, 2238-2260. DOI: 10.1145/3660806. Online publication date: 12-Jul-2024.
    • (2024) CORE: Resolving Code Quality Issues using LLMs. Proceedings of the ACM on Software Engineering 1:FSE, 789-811. DOI: 10.1145/3643762. Online publication date: 12-Jul-2024.
    • (2024) On the Reliability and Explainability of Language Models for Program Generation. ACM Transactions on Software Engineering and Methodology 33:5, 1-26. DOI: 10.1145/3641540. Online publication date: 3-Jun-2024.
    • (2024) Vision Transformer Inspired Automated Vulnerability Repair. ACM Transactions on Software Engineering and Methodology 33:3, 1-29. DOI: 10.1145/3632746. Online publication date: 15-Mar-2024.
    • (2024) Code Review Automation: Strengths and Weaknesses of the State of the Art. IEEE Transactions on Software Engineering 50:2, 338-353. DOI: 10.1109/TSE.2023.3348172. Online publication date: 1-Jan-2024.
    • (2024) Automating modern code review processes with code similarity measurement. Information and Software Technology 173, 107490. DOI: 10.1016/j.infsof.2024.107490. Online publication date: Sep-2024.
    • (2024) Quantifying and characterizing clones of self-admitted technical debt in build systems. Empirical Software Engineering 29:2. DOI: 10.1007/s10664-024-10449-5. Online publication date: 26-Feb-2024.
    • (2023) On potential improvements in the analysis of the evolution of themes in code review comments. 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 340-347. DOI: 10.1109/SEAA60479.2023.00059. Online publication date: 6-Sep-2023.
