DOI: 10.1145/3180155.3180217
research-article

"Was my contribution fairly reviewed?": a framework to study the perception of fairness in modern code reviews

Published: 27 May 2018
Abstract

    Modern code reviews improve the quality of software products. Although modern code reviews rely heavily on human interaction, little is known about whether they are performed fairly. Fairness plays a role in any process in which decisions that affect others are made, and when a system is perceived to be unfair, the productivity and motivation of its participants suffer. In this paper, we use fairness theory to create a framework that describes how fairness affects modern code reviews. To demonstrate its applicability, and the importance of fairness in code reviews, we conducted an empirical study that asked developers of a large industrial open source ecosystem (OpenStack) about their perceptions of fairness in its code review process. Our study shows that, in general, the code review process in OpenStack is perceived as fair; however, a significant portion of respondents perceive it as unfair. We also show that the variability in how reviewers prioritize code reviews signals a lack of consistency and the existence of bias (potentially increasing the perception of unfairness). The contributions of this paper are: (1) we propose a framework---based on fairness theory---for studying and managing social behaviour in modern code reviews, (2) we support the framework with the results of a case study of a large industry-backed open source project, (3) we present evidence that fairness is an issue in the code review process of a large open source ecosystem, and (4) we present a set of guidelines for practitioners to address unfairness in modern code reviews.

    Published In

    ICSE '18: Proceedings of the 40th International Conference on Software Engineering
    May 2018, 1307 pages
    ISBN: 9781450356381
    DOI: 10.1145/3180155
    Conference Chair: Michel Chaudron
    General Chair: Ivica Crnkovic
    Program Chairs: Marsha Chechik, Mark Harman


    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 27 May 2018


    Author Tags

    1. fairness
    2. human and social aspects
    3. modern code review
    4. open source software
    5. software development
    6. transparency

    Qualifiers

    • Research-article

    Conference

    ICSE '18

    Acceptance Rates

    Overall acceptance rate: 276 of 1,856 submissions (15%)



    Article Metrics

    • Downloads (last 12 months): 104
    • Downloads (last 6 weeks): 8

    Cited By
    • (2024) Interpretable White-Box Fairness Testing Through Biased Neuron Identification. Attacks, Defenses and Testing for Deep Learning, 357-382. https://doi.org/10.1007/978-981-97-0425-5_19. Online publication date: 4-Jun-2024.
    • (2023) OSCMS: A Decentralized Open-Source Coordination Management System Using a Novel Triple-Blockchain Architecture. Applied Sciences 13(11), 6580. https://doi.org/10.3390/app13116580. Online publication date: 29-May-2023.
    • (2023) Modern Code Reviews—Survey of Literature and Practice. ACM Transactions on Software Engineering and Methodology 32(4), 1-61. https://doi.org/10.1145/3585004. Online publication date: 26-May-2023.
    • (2023) On potential improvements in the analysis of the evolution of themes in code review comments. 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 340-347. https://doi.org/10.1109/SEAA60479.2023.00059. Online publication date: 6-Sep-2023.
    • (2022) What's bothering developers in code review? Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, 341-342. https://doi.org/10.1145/3510457.3513083. Online publication date: 21-May-2022.
    • (2022) An industrial experience report on retro-inspection. Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, 43-52. https://doi.org/10.1145/3510457.3513055. Online publication date: 21-May-2022.
    • (2022) How does code reviewing feedback evolve? Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, 151-160. https://doi.org/10.1145/3510457.3513039. Online publication date: 21-May-2022.
    • (2022) NeuronFair. Proceedings of the 44th International Conference on Software Engineering, 1519-1531. https://doi.org/10.1145/3510003.3510123. Online publication date: 21-May-2022.
    • (2022) What's bothering developers in code review? 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 341-342. https://doi.org/10.1109/ICSE-SEIP55303.2022.9793977. Online publication date: May-2022.
    • (2021) Review Dynamics and Their Impact on Software Quality. IEEE Transactions on Software Engineering 47(12), 2698-2712. https://doi.org/10.1109/TSE.2020.2964660. Online publication date: 1-Dec-2021.
