DOI: 10.1145/3544902.3546233
research-article
Open access

Do Static Analysis Tools Affect Software Quality when Using Test-driven Development?

Published: 19 September 2022

  • Abstract

    Background. Test-Driven Development (TDD) is an agile software development practice that encourages developers to write “quick-and-dirty” production code to make tests pass, and then apply refactoring to “clean” the written code. However, previous studies have found that refactoring is not applied as often as the TDD process requires, potentially affecting software quality.
    Aims. We investigated the benefits, for software quality, of leveraging a Static Analysis Tool (SAT)—plugged into the Integrated Development Environment (IDE)—when applying TDD.
    Method. We conducted two controlled experiments in which the participants—92 in total—performed an implementation task by applying TDD with or without a SAT highlighting the presence of code smells in their source code. We then analyzed the effect of the SAT used on software quality.
    Results. We found that, overall, the use of a SAT helped the participants significantly improve software quality, yet the participants perceived TDD as more difficult to perform.
    Conclusions. The obtained results may impact: (i) practitioners, helping them improve their TDD practice through the adoption of proper settings and tools; (ii) educators, in better introducing TDD within their courses; and (iii) researchers, interested in developing better tool support for developers or in further studying TDD.
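    The TDD cycle the abstract refers to can be sketched as follows. This is a minimal illustration of the red-green-refactor micro-cycle, not code from the study; the leap-year example and all function names are hypothetical.

```python
# TDD micro-cycle: red (write a failing test) -> green (quick-and-dirty
# code that makes it pass) -> refactor (clean the code without changing
# its behavior).

def test_leap_year():
    assert is_leap_year(2000) is True   # divisible by 400
    assert is_leap_year(1900) is False  # divisible by 100 but not 400
    assert is_leap_year(2024) is True   # divisible by 4
    assert is_leap_year(2023) is False  # not divisible by 4

# "Green" step: quick-and-dirty code written only to make the test pass.
def is_leap_year_dirty(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# "Refactor" step: same behavior, cleaner expression -- the step the
# studied SAT (e.g., a smell-highlighting IDE plug-in) is meant to prompt.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()
```

    The point of the study is precisely this last step: without external prompting, developers tend to stop at the “dirty” version.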


    Cited By

    • (2024) A Kafka-Based Robot Automation Testing Using Genetic Algorithm. In AETA 2022—Recent Advances in Electrical Engineering and Related Sciences: Theory and Application, 297–308. https://doi.org/10.1007/978-981-99-8703-0_25
    • (2023) Managing Vulnerabilities in Software Projects: The Case of NTT Data. In 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 247–253. https://doi.org/10.1109/SEAA60479.2023.00046
    • (2023) Test-Driven Development and Embedded Systems: An Exploratory Investigation. In 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 239–246. https://doi.org/10.1109/SEAA60479.2023.00045
    • (2023) On the Use of Static Analysis to Engage Students with Software Quality Improvement: An Experience with PMD. In Proceedings of the 45th International Conference on Software Engineering: Software Engineering Education and Training, 179–191. https://doi.org/10.1109/ICSE-SEET58685.2023.00023

      Published In

      ESEM '22: Proceedings of the 16th ACM / IEEE International Symposium on Empirical Software Engineering and Measurement
      September 2022, 318 pages
      ISBN: 9781450394277
      DOI: 10.1145/3544902
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Refactoring
      2. Static Analysis Tool
      3. Test-driven Development

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      ESEM '22

      Acceptance Rates

      Overall Acceptance Rate 130 of 594 submissions, 22%



      Article Metrics

      • Downloads (Last 12 months)356
      • Downloads (Last 6 weeks)45

