Abstract
Modern Code Review (MCR) is a practice software engineers adopt to improve code quality. Despite its well-known benefits, it involves non-negligible effort, which has motivated various studies that extract insightful information from MCR data or support the review activity. Although some studies proposed taxonomies of MCR feedback, they are either coarse-grained or focused on particular technologies, typically Java. Moreover, existing studies have classified the concerns raised by reviewers and the review changes triggered during MCR separately. In contrast, we present a joint, in-depth qualitative study of code-level issues found and fixed during the code review process in TypeScript projects, a language that has become popular among practitioners in recent years. We extracted and manually classified 569 review threads from four open-source projects on GitHub: Angular, Kibana, React Native, and VS Code. Our key contribution is a comprehensive and fine-grained classification of aspects discussed during MCR, categorized into four main groups: topic, review target, issue, and code fix. We also present an analysis of the actual outcomes of MCR in these projects and discuss potential research directions.
Data Availability
The research dataset, codebook from qualitative analysis, additional analysis, and scripts used in this paper are available at https://doi.org/10.5281/zenodo.11357931.
Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. Nicole Davila would like to thank CAPES for research grant ref. 88887.480572/2020-00. Igor Wiese thanks CNPq/MCTI/FNDCT grant #408812/2021-4 and MCTIC/CGI/FAPESP grant #2021/06662-1.
Ethics declarations
Conflicts of Interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by: Jeffrey C. Carver.
About this article
Cite this article
Davila, N., Nunes, I. & Wiese, I. A fine-grained taxonomy of code review feedback in TypeScript projects. Empir Software Eng 30, 53 (2025). https://doi.org/10.1007/s10664-024-10604-y