DOI: 10.5555/2487085.2487156

Strategies for avoiding test fixture smells during software evolution

Published: 18 May 2013

Abstract

An important challenge in creating automated tests is how to design test fixtures, i.e., the setup code that initializes the system under test before the actual tests can run. Test designers have to choose between different setup approaches, trading off maintenance overhead against test execution speed. Over time, test code quality can erode and test smells can develop, such as overly general fixtures, obscure in-line code, and dead fields. In this paper, we investigate how fixture-related test smells evolve over time by analyzing several thousand revisions of five open source systems. Our findings indicate that setup management strategies strongly influence the types of test fixture smells that emerge in code, and that several types of fixture smells often emerge at the same time. Based on these findings, we recommend guidelines for setup strategies, and suggest how tool support can be improved both to help avoid the emergence of such smells and to help refactor code when smells do appear.
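To make the trade-off in the abstract concrete, here is a minimal JUnit 4 sketch contrasting implicit shared setup in a setUp method with in-line setup inside a test. The Customer and Invoice classes are hypothetical stand-ins, not code from the paper or its subject systems. When only some tests use the objects a shared setup creates, the fixture drifts toward an overly general fixture, and the untouched fields become dead fields from those tests' point of view.

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    // Hypothetical system under test; minimal stubs keep the sketch self-contained.
    class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    class Invoice {
        private final Customer customer;
        private boolean paid;
        Invoice(Customer customer) { this.customer = customer; }
        void pay() { paid = true; }
        int total() { return 0; }
        boolean isClosed() { return paid; }
    }

    public class InvoiceTest {

        private Customer customer;   // used by every test: reasonable to share
        private Invoice paidInvoice; // read by only one test: a dead field for the
                                     // others, and a step toward a general fixture

        @Before
        public void setUp() {
            // Implicit (shared) setup: runs before each test, keeping tests short
            // but hiding which objects each test actually depends on.
            customer = new Customer("Alice");
            paidInvoice = new Invoice(customer);
            paidInvoice.pay();
        }

        @Test
        public void newInvoiceHasZeroTotal() {
            // In-line setup: the dependency is explicit, at the cost of
            // construction code repeated across tests.
            Invoice fresh = new Invoice(customer);
            assertEquals(0, fresh.total());
        }

        @Test
        public void paidInvoiceIsClosed() {
            assertTrue(paidInvoice.isClosed());
        }
    }

A generic xUnit remedy (not necessarily the specific guideline the paper recommends) is to keep only objects used by every test in the shared setup and to move test-specific construction into the tests themselves or into small creation helpers, so that each test states its own dependencies.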



Information & Contributors

Information

Published In

MSR '13: Proceedings of the 10th Working Conference on Mining Software Repositories
May 2013
438 pages
ISBN: 9781467329361

Publisher

IEEE Press


Cited By

  • (2024) Using Large Language Models to Generate JUnit Tests: An Empirical Study. Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering, pp. 313–322. DOI: 10.1145/3661167.3661216. Online publication date: 18-Jun-2024.
  • (2020) LCCSS. Proceedings of the 14th Brazilian Symposium on Software Components, Architectures, and Reuse, pp. 91–100. DOI: 10.1145/3425269.3425283. Online publication date: 19-Oct-2020.
  • (2020) tsDetect: an open source test smells detection tool. Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1650–1654. DOI: 10.1145/3368089.3417921. Online publication date: 8-Nov-2020.
  • (2019) SolUnit. Proceedings of the 29th Annual International Conference on Computer Science and Software Engineering, pp. 264–273. DOI: 10.5555/3370272.3370300. Online publication date: 4-Nov-2019.
  • (2019) SoCRATES. Proceedings of the Tenth ACM SIGPLAN Symposium on Scala, pp. 22–26. DOI: 10.1145/3337932.3338815. Online publication date: 17-Jul-2019.
  • (2019) Reducing the execution time of unit tests of smart contracts in blockchain platforms. Proceedings of the XV Brazilian Symposium on Information Systems, pp. 1–8. DOI: 10.1145/3330204.3330225. Online publication date: 20-May-2019.
  • (2019) A new dimension of test quality: assessing and generating higher quality unit test cases. Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 419–423. DOI: 10.1145/3293882.3338984. Online publication date: 10-Jul-2019.
  • (2019) Assessing diffusion and perception of test smells in scala projects. Proceedings of the 16th International Conference on Mining Software Repositories, pp. 457–467. DOI: 10.1109/MSR.2019.00072. Online publication date: 26-May-2019.
  • (2019) Mock objects for testing java systems. Empirical Software Engineering, 24(3), pp. 1461–1498. DOI: 10.1007/s10664-018-9663-0. Online publication date: 1-Jun-2019.
  • (2016) An empirical investigation into the nature of test smells. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering, pp. 4–15. DOI: 10.1145/2970276.2970340. Online publication date: 25-Aug-2016.
