Research Article | Open Access

Can GPT-4 Replicate Empirical Software Engineering Research?

Published: 12 July 2024
    Abstract

    Empirical software engineering research on production systems has brought forth a better understanding of the software engineering process for practitioners and researchers alike. However, only a small subset of production systems is studied, limiting the impact of this research. While software engineering practitioners could benefit from replicating research on their own data, this poses its own set of challenges, since performing replications requires a deep understanding of research methodologies and subtle nuances in software engineering data. Given that large language models (LLMs), such as GPT-4, show promise in tackling both software engineering- and science-related tasks, these models could help replicate and thus democratize empirical software engineering research. In this paper, we examine GPT-4’s abilities to perform replications of empirical software engineering research on new data. We specifically study their ability to surface assumptions made in empirical software engineering research methodologies, as well as their ability to plan and generate code for analysis pipelines on seven empirical software engineering papers. We perform a user study with 14 participants with software engineering research expertise, who evaluate GPT-4-generated assumptions and analysis plans (i.e., a list of module specifications) from the papers. We find that GPT-4 is able to surface correct assumptions, but struggles to generate ones that apply common knowledge about software engineering data. In a manual analysis of the generated code, we find that the GPT-4-generated code contains correct high-level logic, given a subset of the methodology. However, the code contains many small implementation-level errors, reflecting a lack of software engineering knowledge. Our findings have implications for leveraging LLMs for software engineering research as well as practitioner data scientists in software teams.
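
    To make concrete what planning an analysis pipeline entails here, the sketch below shows one plausible way to prompt GPT-4 for assumptions and an analysis plan (a list of module specifications) given a paper's methodology. It is a minimal illustration only, not the authors' prompts or tooling; the model name, prompt wording, and the methodology excerpt are assumptions, and it presumes the openai>=1.0 Python client with an API key set in the environment.

        # Minimal sketch, assuming the openai>=1.0 Python client. This is NOT the
        # authors' pipeline; the prompt and methodology text are illustrative only.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Hypothetical methodology excerpt standing in for a paper's methods section.
        methodology = """\
        We mine the commit history of a repository, label each commit as bug-fixing
        based on keywords in its message, and test whether commit bugginess differs
        by developer experience using a chi-squared test.
        """

        prompt = (
            "You are replicating an empirical software engineering study on new data.\n"
            "Given the methodology below, list:\n"
            "1. The implicit assumptions the methodology makes about the data.\n"
            "2. An analysis plan as a numbered list of module specifications, each\n"
            "   with a module name, inputs, outputs, and a one-sentence description.\n\n"
            "Methodology:\n" + methodology
        )

        # Single chat-completion call; temperature 0 for more reproducible plans.
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )

        print(response.choices[0].message.content)

    A separate step would then turn each module specification into executable analysis code; per the abstract, that code-generation step is where most small implementation-level errors appear.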


    Published In

    Proceedings of the ACM on Software Engineering, Volume 1, Issue FSE
    July 2024, 2770 pages
    EISSN: 2994-970X
    DOI: 10.1145/3554322
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 12 July 2024
    Published in PACMSE Volume 1, Issue FSE

    Author Tags

    1. Large language models
    2. empirical software engineering
    3. study replication

    Qualifiers

    • Research-article
