DOI: 10.1145/2786805.2786818

Effective test suites for mixed discrete-continuous Stateflow controllers

Published: 30 August 2015

Abstract

Modeling mixed discrete-continuous controllers using Stateflow is common practice and has a long tradition in the embedded software system industry. Testing Stateflow models is complicated by expensive and manual test oracles that are not amenable to full automation due to the complex continuous behaviors of such models. In this paper, we reduce the cost of manual test oracles by providing test case selection algorithms that help engineers develop small test suites with high fault-revealing power for Stateflow models. We present six test selection algorithms for discrete-continuous Stateflow models: an adaptive random test selection algorithm that diversifies test inputs, two white-box coverage-based algorithms, a black-box algorithm that diversifies test outputs, and two search-based black-box algorithms that aim to maximize the likelihood of the presence of continuous output failure patterns. We evaluate and compare our test selection algorithms, and find that our three output-based algorithms consistently outperform the coverage- and input-based algorithms in revealing faults in discrete-continuous Stateflow models. Further, we show that our output-based algorithms are complementary: the two search-based algorithms perform best at revealing specific failures with small test suites, while the output diversity algorithm identifies different failure types better than the other algorithms when test suites exceed a certain size.
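
To make the output diversity idea concrete, here is a minimal Python sketch of greedy diversity-based test selection: starting from a random test, it repeatedly adds the candidate whose simulated output signal has the largest minimum distance to the outputs already selected (a max-min heuristic). The function and variable names and the Euclidean signal distance are illustrative assumptions rather than the authors' implementation; comparing input vectors instead of output signals gives the input-diversity (adaptive random) variant mentioned in the abstract.

import random

def euclidean(sig_a, sig_b):
    # Pointwise Euclidean distance between two equal-length output signals,
    # each a sequence of floats sampled over the simulation horizon.
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

def select_diverse_tests(outputs, k, distance=euclidean, seed=None):
    # Greedily pick k tests whose outputs are maximally spread out.
    # outputs: dict mapping a test-case id to its simulated output signal.
    # Returns the ids of the selected tests.
    rng = random.Random(seed)
    candidates = list(outputs)
    # Seed the suite with a random test, then repeatedly add the candidate
    # whose minimum distance to the already-selected outputs is largest.
    selected = [candidates.pop(rng.randrange(len(candidates)))]
    while candidates and len(selected) < k:
        best = max(candidates,
                   key=lambda c: min(distance(outputs[c], outputs[s])
                                     for s in selected))
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: the two near-identical signals are unlikely to be
# picked together when k == 2.
suite = select_diverse_tests(
    {"t1": [0.0, 0.1, 0.2], "t2": [0.0, 0.1, 0.21], "t3": [5.0, 4.0, 3.0]},
    k=2, seed=42)
print(suite)

The max-min step is what spreads a small suite across distinct output behaviors; the paper's search-based variants instead optimize objectives tailored to continuous output failure patterns, which this sketch does not attempt to reproduce.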




Published In

ESEC/FSE 2015: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering
August 2015
1068 pages
ISBN: 9781450336758
DOI: 10.1145/2786805
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Stateflow testing
  2. failure-based testing
  3. mixed discrete-continuous behaviors
  4. output diversity
  5. structural coverage

Qualifiers

  • Research-article


Conference

ESEC/FSE'15

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%



Cited By

  • (2024) DiPri: Distance-based Seed Prioritization for Greybox Fuzzing. ACM Transactions on Software Engineering and Methodology. https://doi.org/10.1145/3654440. Online publication date: 26-Mar-2024.
  • (2023) Harnessing Large Language Models for Simulink Toolchain Testing and Developing Diverse Open-Source Corpora of Simulink Models for Metric and Evolution Analysis. Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 1541-1545. https://doi.org/10.1145/3597926.3605233. Online publication date: 12-Jul-2023.
  • (2023) Some Seeds Are Strong: Seeding Strategies for Search-based Test Case Selection. ACM Transactions on Software Engineering and Methodology, 32(1):1-47. https://doi.org/10.1145/3532182. Online publication date: 13-Feb-2023.
  • (2023) What Not to Test (For Cyber-Physical Systems). IEEE Transactions on Software Engineering, 49(7):3811-3826. https://doi.org/10.1109/TSE.2023.3272309. Online publication date: Jul-2023.
  • (2023) Replicability Study: Corpora For Understanding Simulink Models & Projects. 2023 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), pages 1-12. https://doi.org/10.1109/ESEM56168.2023.10304867. Online publication date: 26-Oct-2023.
  • (2022) Learning self-adaptations for IoT networks. Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems, pages 13-24. https://doi.org/10.1145/3524844.3528053. Online publication date: 18-May-2022.
  • (2022) Is the revisited hypervolume an appropriate quality indicator to evaluate multi-objective test case selection algorithms? Proceedings of the Genetic and Evolutionary Computation Conference, pages 1317-1326. https://doi.org/10.1145/3512290.3528717. Online publication date: 8-Jul-2022.
  • (2022) Machine learning-based test oracles for performance testing of cyber-physical systems: An industrial case study on elevators dispatching algorithms. Journal of Software: Evolution and Process, 34(11). https://doi.org/10.1002/smr.2465. Online publication date: 25-May-2022.
  • (2021) A Survey on Adaptive Random Testing. IEEE Transactions on Software Engineering, 47(10):2052-2083. https://doi.org/10.1109/TSE.2019.2942921. Online publication date: 1-Oct-2021.
  • (2021) Dynamic test prioritization of product lines: An application on configurable simulation models. Software Quality Journal. https://doi.org/10.1007/s11219-021-09571-0. Online publication date: 20-Oct-2021.
