Abstract
Conformance testing in model-based development refers to the testing activity that verifies whether code generated (manually or automatically) from a model is behaviorally equivalent to that model. Presently, the adequacy of conformance testing is inferred by measuring the structural coverage achieved over the model. We hypothesize that adequacy metrics for conformance testing should consider structural coverage over the requirements, either in place of or in addition to structural coverage over the model. Measuring structural coverage over the requirements indicates how well the conformance tests exercise the required behavior of the system.
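To make the distinction concrete, the following is a minimal sketch, not taken from the paper, of one way "structural coverage over the requirements" can be measured: a requirement counts as covered only when some test exercises it non-vacuously, that is, actually makes its antecedent true. The Requirement class, the predicates, and the traces are hypothetical illustrations.

from typing import Callable, Dict, List

State = Dict[str, bool]
Trace = List[State]  # one test = a sequence of input/output states

class Requirement:
    """A simple 'shall' requirement of the form: antecedent -> consequent."""
    def __init__(self, name: str,
                 antecedent: Callable[[State], bool],
                 consequent: Callable[[State], bool]):
        self.name = name
        self.antecedent = antecedent
        self.consequent = consequent

    def exercised_by(self, trace: Trace) -> bool:
        # Non-vacuously exercised only if the antecedent becomes
        # true at some step; a trace that never triggers it satisfies
        # the requirement vacuously and earns no coverage.
        return any(self.antecedent(s) for s in trace)

    def violated_by(self, trace: Trace) -> bool:
        # What conformance checking itself looks for.
        return any(self.antecedent(s) and not self.consequent(s)
                   for s in trace)

def requirements_coverage(reqs: List[Requirement],
                          suite: List[Trace]) -> float:
    """Fraction of requirements non-vacuously exercised by the suite."""
    covered = sum(1 for r in reqs
                  if any(r.exercised_by(t) for t in suite))
    return covered / len(reqs)

# Hypothetical requirement: "when the alarm condition holds, the
# warning output shall be raised in the same step".
req = Requirement("raise_warning",
                  antecedent=lambda s: s["alarm"],
                  consequent=lambda s: s["warning"])

vacuous_suite = [[{"alarm": False, "warning": False}]]
good_suite = [[{"alarm": True, "warning": True}]]

print(requirements_coverage([req], vacuous_suite))  # 0.0
print(requirements_coverage([req], good_suite))     # 1.0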
We conducted an experiment to investigate the hypothesis that structural coverage over formal requirements is more effective than structural coverage over the model as an adequacy measure for conformance testing. This hypothesis was rejected at the 5% statistical significance level on three of the four case examples in our experiment. Nevertheless, we found that tests providing requirements coverage detected several faults that tests providing model coverage missed. We therefore formed a second hypothesis: complementing model coverage with requirements coverage yields a more effective adequacy measure for conformance testing than model coverage alone. In our experiment, test suites providing both requirements coverage and model coverage were more effective at finding faults than test suites providing model coverage alone, at the 5% statistical significance level. Based on these results, we believe existing adequacy measures for conformance testing that consider only model coverage can be strengthened by combining them with rigorous requirements coverage metrics.
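As an illustration of the kind of statistical comparison reported above, the sketch below compares fault-detection counts from two families of test suites with a one-sided permutation test at the 5% level. The fault counts are invented, and the permutation test merely stands in for whatever statistical procedure the experiment actually used.

import random

def permutation_test(xs, ys, trials=10000, seed=0):
    """One-sided permutation test; H1 is mean(xs) > mean(ys)."""
    rng = random.Random(seed)
    observed = sum(xs) / len(xs) - sum(ys) / len(ys)
    pooled = xs + ys
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        px, py = pooled[:len(xs)], pooled[len(xs):]
        if sum(px) / len(px) - sum(py) / len(py) >= observed:
            hits += 1
    return hits / trials

# Invented fault-detection counts per generated test suite.
combined_cov = [14, 15, 13, 16, 15, 14, 17, 15]  # model + requirements
model_cov    = [11, 12, 10, 13, 12, 11, 12, 13]  # model coverage only

p = permutation_test(combined_cov, model_cov)
print(f"p = {p:.4f}; reject H0 at the 5% level: {p < 0.05}")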
This work has been partially supported by NASA Ames Research Center Cooperative Agreement NNA06CB21A, NASA IV&V Facility Contract NNG-05CB16C, and the L-3 Titan Group.
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Rajan, A., Whalen, M., Staats, M., Heimdahl, M.P.E. (2008). Requirements Coverage as an Adequacy Measure for Conformance Testing. In: Liu, S., Maibaum, T., Araki, K. (eds) Formal Methods and Software Engineering. ICFEM 2008. Lecture Notes in Computer Science, vol 5256. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88194-0_8
Print ISBN: 978-3-540-88193-3
Online ISBN: 978-3-540-88194-0