Model-Based Test Case Prioritization Using an Alternating Variable Method for Regression Testing of a UML-Based Model
Abstract
1. Introduction
- RQ1. What mutation operators should be used for model-based mutation testing?
- RQ2. Which TCP technique is optimal for fault detection? Does this result differ from the code-based result?
- RQ3. Is the optimal search algorithm for model-based TCP the same as the code-based algorithm? Does this work for real industrial cases?
2. Background
2.1. The Model-Based Development Approach
2.2. Test Case Optimization
2.2.1. Test Case Minimization
2.2.2. Test Case Selection
2.2.3. Test Case Prioritization
2.3. Mutation Testing
2.4. Search Techniques
3. Model-Based Mutation Testing
3.1. Model-Based Test Data Generation
3.2. State Chart-Based Mutation Operators
3.3. Test Case Prioritization
- TF_1: test case revealing the first fault.
- TF_2: test case revealing the second fault.
- ⋯
- TF_i: test case revealing the ith fault.
- n: number of test cases
- m: number of faults
- t_i: cost of the ith test case
- f_j: severity of the jth fault
- TF_i: order, within test suite T, of the test case revealing the ith fault
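Assuming the fitness metric behind this notation is the standard average percentage of faults detected (APFD) and its cost- and severity-cognizant variant APFD_C, as used by Elbaum et al., the usual definitions are:

$$\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \times m} + \frac{1}{2n}$$

$$\mathrm{APFD}_C = \frac{\sum_{i=1}^{m} \left( f_i \times \left( \sum_{j=TF_i}^{n} t_j - \tfrac{1}{2}\, t_{TF_i} \right) \right)}{\sum_{j=1}^{n} t_j \times \sum_{i=1}^{m} f_i}$$

Intuitively, APFD rewards orderings in which the positions TF_i of the fault-revealing test cases come early, and APFD_C additionally weights each fault by its severity f_i and each test case by its cost t_j.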
3.4. The Alternating Variable Method (AVM)
Algorithm 1 The search-based test case prioritization using the alternating variable method (AVM)
1: while termination criterion is not matched do
2:   S′ = AVMSearch(S, n)
3:   if S is not improved in last 3 loops then
4:     S″ = exploratorySearch(S)
5:   end if
6:   if fitness(S″) > fitness(S) then
7:     S‴ = AVMSearch(S″)
8:     if fitness(S‴) > fitness(S′) then
9:       S′ = S‴
10:    else
11:      S′ = S″
12:    end if
13:  end if
14:  S = S′
15: end while
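A minimal executable sketch of this loop is given below, under illustrative assumptions rather than as the authors' implementation: fitness is plain APFD computed from a Boolean test-versus-fault kill matrix, AVMSearch is approximated by best-improvement relocation of one test case at a time, and exploratorySearch is a random restart. All names and parameters here are hypothetical.

```python
import random

def fitness(order, fault_matrix):
    """APFD of a test-case ordering.

    fault_matrix[t][f] is True when test case t reveals fault f;
    the sketch assumes every fault is revealed by at least one test.
    """
    n, m = len(order), len(fault_matrix[0])
    first_reveal = []
    for f in range(m):
        for pos, t in enumerate(order, start=1):
            if fault_matrix[t][f]:
                first_reveal.append(pos)
                break
    return 1 - sum(first_reveal) / (n * m) + 1 / (2 * n)

def avm_search(order, fault_matrix):
    """Best-improvement local search: relocate one test case at a time."""
    best = list(order)
    best_fit = fitness(best, fault_matrix)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(len(best)):
                if i == j:
                    continue
                cand = list(best)
                cand.insert(j, cand.pop(i))   # move test i to position j
                cand_fit = fitness(cand, fault_matrix)
                if cand_fit > best_fit:
                    best, best_fit, improved = cand, cand_fit, True
    return best

def exploratory_search(order):
    """Random restart used to escape local optima."""
    cand = list(order)
    random.shuffle(cand)
    return cand

def prioritize(fault_matrix, iterations=20, patience=3):
    """Loose analogue of Algorithm 1: AVM steps plus exploratory restarts."""
    order = list(range(len(fault_matrix)))
    stall = 0
    for _ in range(iterations):                      # termination criterion
        new = avm_search(order, fault_matrix)
        if fitness(new, fault_matrix) <= fitness(order, fault_matrix):
            stall += 1                               # no improvement this loop
        if stall >= patience:                        # stuck: explore, then refine
            refined = avm_search(exploratory_search(order), fault_matrix)
            if fitness(refined, fault_matrix) > fitness(new, fault_matrix):
                new = refined
            stall = 0
        order = new
    return order

if __name__ == "__main__":
    # Toy kill matrix: 4 test cases (rows) versus 3 faults (columns).
    fm = [
        [True, False, False],
        [False, True, False],
        [False, False, True],
        [True, True, True],
    ]
    print(prioritize(fm))  # a high-APFD ordering, e.g. [3, 0, 1, 2]
```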
4. The Triangle Classification Example
- Random: After generating a test suite that kills the mutants produced by the mutation operators, the suite is prioritized randomly. A single run can differ greatly from the other algorithms, so the number of random permutations is set equal to the number of test cases; this ensures that the randomly generated orderings can be evaluated with the fitness function. The computational complexity is ;
- Statement total (StTotal): Statement coverage is a code-based criterion that checks whether every statement is executed. At the model level, this prioritization instead uses an objective function that covers all possible transitions of the state diagram, yielding 100 percent transition coverage. The computational complexity is ;
- Statement additional (StAddtl): This is similar to the above method, but after a reference test case is selected, a greedy algorithm repeatedly picks the test case that covers the most transitions not yet covered (see the sketch after this list). The computational complexity is ;
- Fault exposure probability total (FepTotal): Test cases are executed in decreasing order of fault-detection probability; the first test case selected maximally increases the total probability of exposing faults, so the number of exposed faults is maximized at every step. The computational complexity is ;
- FEP additional (FepAddtl): This is similar to the above method, but prioritization uses a greedy algorithm so that the fault exposure probability always increases as each next test case is added. The computational complexity is ;
- AVM: An initial permutation is evaluated with the fitness function and then locally optimized. Because the AVM performs only local optimization, an exploratory step is added so that a global solution can still be found when local optimization fails to improve after a defined number of trials. The computational complexity is .
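To make the "additional" greedy strategy concrete, here is a small Python sketch. It is illustrative only: the `coverage` structure (each test case mapped to the set of model transitions it exercises) is an assumption, not the paper's data representation.

```python
def additional_order(coverage):
    """Greedy 'additional' prioritization.

    coverage: list of sets; coverage[t] is the set of transitions
    (or statements) exercised by test case t. Each pick adds the most
    not-yet-covered transitions; when nothing new can be added, the
    covered set is reset and the greedy pass starts over.
    """
    all_items = set().union(*coverage)
    remaining = set(range(len(coverage)))
    uncovered = set(all_items)
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            # No remaining test adds new coverage: reset, as the
            # 'additional' strategy conventionally does.
            uncovered = set(all_items)
            best = max(remaining, key=lambda t: len(coverage[t] & uncovered))
        order.append(best)
        remaining.remove(best)
        uncovered -= coverage[best]
    return order

# Example: four tests over five transitions T1..T5.
cov = [{"T1", "T2"}, {"T2", "T3", "T4"}, {"T5"}, {"T1"}]
print(additional_order(cov))  # one valid order, e.g. [1, 0, 2, 3]
```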
5. Empirical Studies: Industrial Cases
5.1. Power-Window Switch Module
5.2. Body Control Module
5.3. Passive Entry Passive Start System
5.4. Results
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
ID | Mutation Operator | Mutation Method | Formula |
---|---|---|---|
MO1 | Replace call trigger | Replaces a trigger of a transition with another trigger. | |
MO2 | Replace guard | Replaces an operand, a logical operator, or a relational operator on a guard. | |
MO3 | Replace guard to false | Ensures that a guard is false. | |
MO4 | Replace guard to true | Ensures that a guard is true. | |
MO5 | Replace action | Replaces an action on a transition or a state with another action. | |
MO6 | Replace incoming | Replaces an incoming transition to another state, excluding the default transition. | |
MO7 | Replace outgoing | Replaces an outgoing transition to another state, including the default transition. | |
MO8 | Remove state | Removes a state and its related transitions on a state machine. | |
MO9 | Remove transition | Removes a transition on a state machine. | |
MO10 | Remove call trigger | Removes a call trigger on a transition. | |
MO11 | Remove guard | Removes a part of a guard condition. | |
MO12 | Remove action | Removes an action on a transition or a state. |
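As a concrete illustration of how such operators act on a model element, the sketch below applies MO3-, MO4-, and MO10-style mutations to a toy transition record. The `Transition` dataclass is a hypothetical stand-in; the paper itself mutates XMI-exported UML state machines, not this structure.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class Transition:
    source: str
    target: str
    trigger: Optional[str] = None
    guard: Optional[str] = None
    action: Optional[str] = None

def mo3_guard_to_false(t: Transition) -> Transition:
    """MO3: force the guard to false, so the transition can never fire."""
    return replace(t, guard="false")

def mo4_guard_to_true(t: Transition) -> Transition:
    """MO4: force the guard to true (drop it), so the transition always fires."""
    return replace(t, guard=None)

def mo10_remove_call_trigger(t: Transition) -> Transition:
    """MO10: remove the call trigger from the transition."""
    return replace(t, trigger=None)

# A power-window-like transition, mutated by each operator in turn.
t = Transition("Idle", "MovingUp", trigger="btnUp",
               guard="speed < 10", action="motor(+1)")
for mutate in (mo3_guard_to_false, mo4_guard_to_true, mo10_remove_call_trigger):
    print(mutate(t))
```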
Category | Sum of Squares | df | Mean Squares | F | p-Value |
---|---|---|---|---|---|
Between-groups | 0.553 | 5 | 0.111 | 4635.015 | <0.005 |
Within-groups | 0.014 | 594 | 0.000 | - | - |
Total | 0.567 | 599 | - | - | - |
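Figures of this kind can be reproduced from raw APFD samples with standard tooling. A hedged sketch follows; the 600 observations themselves are not given here, so random placeholders stand in (6 methods × 100 runs, matching the table's 5 and 594 degrees of freedom):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder APFD samples: 6 prioritization methods, 100 runs each.
groups = [rng.normal(loc=0.9 + 0.01 * k, scale=0.015, size=100)
          for k in range(6)]

f_stat, p_value = stats.f_oneway(*groups)   # between/within-groups F test
print(f"F = {f_stat:.3f}, p = {p_value:.3g}")

# Pairwise (I)-(J) comparisons, as in the post hoc table below,
# via Tukey's honestly significant difference test.
print(stats.tukey_hsd(*groups))
```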
(I) Method | (J) Method | Average Difference (I − J) | p
---|---|---|---
Random | StTotal | 0.003824646490 | <0.05
Random | StAddtl | 0.004647020250 | <0.05
Random | FepTotal | −0.051540050500 | <0.05
Random | FepAddtl | −0.060530909030 | <0.05
Random | AVM | −0.060551111040 | <0.05
StTotal | Random | −0.003824646490 | <0.05
StTotal | StAddtl | 0.000822373760 | ≥0.05
StTotal | FepTotal | −0.055364696990 | <0.05
StTotal | FepAddtl | −0.064355555520 | <0.05
StTotal | AVM | −0.064375757530 | <0.05
StAddtl | Random | −0.004647020250 | <0.05
StAddtl | StTotal | −0.000822373760 | ≥0.05
StAddtl | FepTotal | −0.056187070750 | <0.05
StAddtl | FepAddtl | −0.065177929280 | <0.05
StAddtl | AVM | −0.065198131290 | <0.05
FepTotal | Random | 0.051540050500 | <0.05
FepTotal | StTotal | 0.055364696990 | <0.05
FepTotal | StAddtl | 0.056187070750 | <0.05
FepTotal | FepAddtl | −0.008990858530 | <0.05
FepTotal | AVM | −0.009011060540 | <0.05
FepAddtl | Random | 0.060530909030 | <0.05
FepAddtl | StTotal | 0.064355555520 | <0.05
FepAddtl | StAddtl | 0.065177929280 | <0.05
FepAddtl | FepTotal | 0.008990858530 | <0.05
FepAddtl | AVM | −0.000020202010 | ≥0.05
AVM | Random | 0.060551111040 | <0.05
AVM | StTotal | 0.064375757530 | <0.05
AVM | StAddtl | 0.065198131290 | <0.05
AVM | FepTotal | 0.009011060540 | <0.05
AVM | FepAddtl | 0.000020202010 | ≥0.05
Module | Number of States | Number of Transitions | Number of State Machines | Number of Test Cases | p-Value |
---|---|---|---|---|---|
Courtesy Lamp | 12 | 26 | 4 | 1135 | <0.005 |
OS Mirror | 109 | 349 | 32 | 1702 | <0.005 |
Power window | 78 | 126 | 26 | 1780 | <0.005 |
Puddle lamp | 14 | 26 | 5 | 887 | <0.005 |
Interface | 26 | 39 | 13 | 1719 | <0.005 |
Heater | 4 | 6 | 2 | 1294 | <0.005 |
IMS | 27 | 38 | 4 | 1295 | <0.005 |
Total | 270 | 610 | 86 | 9812 | -
(I) Method | (J) Method | Average Difference (I − J) | p-Value
---|---|---|---
Random | StTotal | 0.004165355370 | <0.005
Random | StAddtl | 0.003424223600 | ≥0.005
Random | FepTotal | −0.069128709500 | <0.005
Random | FepAddtl | −0.082282539710 | <0.005
Random | AVM | −0.082333333370 | <0.005
StTotal | Random | −0.004165355370 | <0.005
StTotal | StAddtl | −0.000741131770 | ≥0.005
StTotal | FepTotal | −0.073294064870 | <0.005
StTotal | FepAddtl | −0.086447895080 | <0.005
StTotal | AVM | −0.086498688740 | <0.005
StAddtl | Random | −0.003424223600 | ≥0.005
StAddtl | StTotal | 0.000741131770 | ≥0.005
StAddtl | FepTotal | −0.072552933100 | <0.005
StAddtl | FepAddtl | −0.085706763310 | <0.005
StAddtl | AVM | −0.085757556970 | <0.005
FepTotal | Random | 0.069128709500 | <0.05
FepTotal | StTotal | 0.073294064870 | <0.005
FepTotal | StAddtl | 0.072552933100 | <0.005
FepTotal | FepAddtl | −0.013153830210 | <0.005
FepTotal | AVM | −0.013204623870 | <0.005
FepAddtl | Random | 0.082282539710 | <0.05
FepAddtl | StTotal | 0.086447895080 | <0.005
FepAddtl | StAddtl | 0.085706763310 | <0.005
FepAddtl | FepTotal | 0.013153830210 | <0.005
FepAddtl | AVM | −0.000050793660 | ≥0.005
AVM | Random | 0.082333333370 | <0.05
AVM | StTotal | 0.086498688740 | <0.005
AVM | StAddtl | 0.085757556970 | <0.005
AVM | FepTotal | 0.013204623870 | <0.005
AVM | FepAddtl | 0.000050793660 | ≥0.005
Module | Number of States | Number of Transitions | Number of Test Cases | p-Value |
---|---|---|---|---|
Tailgate | 8 | 16 | 1857 | <0.005 |
OS Mirror | 10 | 73 | 1583 | <0.005 |
Defroster | 7 | 7 | 1597 | <0.005 |
Driving | 2 | 3 | 1329 | <0.005 |
Interface | 5 | 11 | 1910 | <0.005 |
Interior lamp | 2 | 5 | 1765 | <0.005 |
Power window | 8 | 9 | 1785 | <0.005 |
Remote key entry | 1 | 4 | 1253 | <0.005 |
Flasher | 3 | 37 | 1017 | <0.005 |
Warning | 5 | 23 | 1945 | <0.005 |
Wipers/washers | 2 | 5 | 1657 | <0.005 |
Exterior lamp | 2 | 3 | 1915 | <0.005 |
Mirror | 3 | 7 | 1409 | <0.005 |
Door locks | 18 | 48 | 1528 | <0.005 |
Total | 76 | 251 | 22,550 | - |
Module | Number of State Diagrams | Number of Test Cases | F | p-Value |
---|---|---|---|---|
Unit | 81 | 1387 | 4389.207 | <0.005 |
Access | 13 | 1993 | 1631.736 | <0.005 |
RKE | 11 | 1799 | 4267.017 | <0.005 |
Start | 44 | 1385 | 3139.672 | <0.005 |
Warning | 49 | 1755 | 4833.233 | <0.005 |
Total | 198 | 8319 | - | - |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).