
Some Seeds Are Strong: Seeding Strategies for Search-based Test Case Selection

Published: 13 February 2023

Abstract

Testing software systems usually takes a long time. Search-based test case selection is a widely investigated technique for optimizing the testing process. In this article, we propose a set of seeding strategies for the test case selection problem that generate the initial population of Pareto-based multi-objective algorithms, with the goals of (1) helping to find an overall better set of solutions and (2) enhancing the convergence of the algorithms. The seeding strategies were integrated with four state-of-the-art multi-objective search algorithms and applied in two contexts where regression testing is paramount: (1) simulation-based testing of cyber-physical systems and (2) continuous integration. For the first context, we evaluated our approach using six fitness function combinations and six independent case studies, whereas in the second context, we derived a total of six fitness function combinations and employed four case studies. Our evaluation suggests that some of the proposed seeding strategies are indeed helpful for solving the multi-objective test case selection problem. Specifically, the proposed seeding strategies provided higher convergence of the algorithms towards optimal solutions in 96% of the studied scenarios and better overall cost-effectiveness with a standard search budget in 85% of the studied scenarios.
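
The core idea, concretely, is to replace part of the randomly generated initial population of a Pareto-based algorithm (e.g., NSGA-II) with individuals built from domain heuristics, so the search starts from promising regions of the solution space. The sketch below is an illustration only, not one of the strategies proposed in the article: it assumes a binary test-selection encoding with per-test-case cost and coverage values, and the greedy coverage-per-cost heuristic, function names, and toy data are all hypothetical.

import random

def greedy_seed(costs, coverage, budget_fraction):
    # Hypothetical heuristic seed: greedily select test cases with the best
    # coverage-per-cost ratio until a fraction of the total cost is spent.
    n = len(costs)
    order = sorted(range(n), key=lambda i: coverage[i] / costs[i], reverse=True)
    budget = budget_fraction * sum(costs)
    individual, spent = [0] * n, 0.0
    for i in order:
        if spent + costs[i] <= budget:
            individual[i] = 1
            spent += costs[i]
    return individual

def seeded_initial_population(costs, coverage, pop_size, seed_fractions=(0.25, 0.5, 0.75)):
    # Initial population for a Pareto-based selector: a few heuristic seeds
    # at different cost budgets plus randomly generated individuals.
    n = len(costs)
    population = [greedy_seed(costs, coverage, f) for f in seed_fractions]
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(n)])
    return population[:pop_size]

# Toy usage: six test cases with execution costs and coverage scores.
costs = [3.0, 1.0, 2.0, 4.0, 1.5, 2.5]
coverage = [5, 2, 6, 7, 1, 4]
population = seeded_initial_population(costs, coverage, pop_size=10)
print(len(population), population[0])

In a real setting, this population would simply be handed to the multi-objective optimizer in place of a fully random one; everything else in the search (crossover, mutation, Pareto ranking) stays unchanged.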


Published In

ACM Transactions on Software Engineering and Methodology, Volume 32, Issue 1
January 2023
954 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/3572890
Editor: Mauro Pezzè

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 February 2023
Online AM: 11 May 2022
Accepted: 14 April 2022
Revised: 21 December 2021
Received: 10 May 2021
Published in TOSEM Volume 32, Issue 1

Author Tags

  1. Test case selection
  2. search-based software testing
  3. regression testing

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • Department of Education, Universities and Research of the Basque Country

Article Metrics

  • Downloads (last 12 months): 268
  • Downloads (last 6 weeks): 21
Reflects downloads up to 22 Jan 2025

Cited By

  • (2024) Hybrid whale optimized crow search algorithm and multi-SVM classifier for effective system level test case selection. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology 46, 2 (2024), 4191–4207. DOI: 10.3233/JIFS-232700. Online publication date: 14-Feb-2024.
  • (2024) Reinforcement Learning Informed Evolutionary Search for Autonomous Systems Testing. ACM Transactions on Software Engineering and Methodology 33, 8 (2024), 1–45. DOI: 10.1145/3680468. Online publication date: 27-Jul-2024.
  • (2024) A Review of the Applications of Heuristic Algorithms in Test Case Generation Problem. 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C), 856–865. DOI: 10.1109/QRS-C63300.2024.00114. Online publication date: 1-Jul-2024.
  • (2024) A Detection-Based Multi-Objective Test Case Selection Algorithm to Improve Time and Efficiency in Regression Testing. IEEE Access 12 (2024), 114974–114994. DOI: 10.1109/ACCESS.2024.3435678. Online publication date: 2024.
  • (2024) ESSENT. Information Sciences: an International Journal 656, C (2024). DOI: 10.1016/j.ins.2023.119915. Online publication date: 4-Mar-2024.
  • (2024) E2E test execution optimization for web application based on state reuse. Journal of Software: Evolution and Process 37, 1 (2024). DOI: 10.1002/smr.2714. Online publication date: Sep-2024.
  • (2023) Applying and Extending the Delta Debugging Algorithm for Elevator Dispatching Algorithms (Experience Paper). Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 1055–1067. DOI: 10.1145/3597926.3598117. Online publication date: 12-Jul-2023.
  • (2023) Generic and industrial scale many-criteria regression test selection. Journal of Systems and Software 205, C (2023). DOI: 10.1016/j.jss.2023.111802. Online publication date: 1-Nov-2023.
  • (2023) A microservice-based framework for multi-level testing of cyber-physical systems. Software Quality Journal 32, 1 (2023), 193–223. DOI: 10.1007/s11219-023-09639-z. Online publication date: 31-May-2023.
  • (2023) A Novel Mutation Operator for Search-Based Test Case Selection. Search-Based Software Engineering, 84–98. DOI: 10.1007/978-3-031-48796-5_6. Online publication date: 8-Dec-2023.
