Testability, fault size and the domain-to-range ratio: An eternal triangle

Published: 01 August 2000

    Abstract

    A number of different concepts have been proposed that, loosely speaking, revolve around the notion of software testability. Indeed, the concept of testability itself has been interpreted in a variety of ways by the software community. One interpretation is concerned with the extent of the modifications a program component requires, in terms of its input and output variables, so that the entire behaviour of the component is observable and controllable. Another interpretation is the ease with which faults, if present in a program, can be revealed by the testing process and the propagation, infection and execution (PIE) model has been proposed as a method of estimating this. It has been suggested that this particular interpretation of testability might be linked with the metric domain-to-range ratio (DRR), i.e. the ratio of the cardinality of the set of all inputs (the domain) to the cardinality of the set of all outputs (the range). This paper reports work in progress exploring some of the connections between the concepts mentioned. In particular, a simple mathematical link is established between domain-to-range ratio and the observability and controllability aspects of testability. In addition, the PIE model is re-considered and a relationship with fault size is observed. This leads to the suggestion that it might be more straightforward to estimate PIE testability by an adaptation of traditional mutation analysis. The latter suggestion exemplifies the main goals of the work described here, namely to seek greater understanding of testability in general and, ultimately, to find easier ways of determining it.
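    The abstract describes the domain-to-range ratio and the PIE view of testability in prose only. As a rough illustration (not taken from the paper), the Python sketch below computes the DRR of a toy component by enumerating a small finite input domain, and estimates a mutation "kill rate" as a crude proxy for the execution, infection and propagation probabilities that the PIE model is concerned with. The function component, its mutant and the chosen domain are hypothetical, and the range is approximated here by the image of the finite domain.

        import random

        def component(x: int) -> int:
            """Toy component that collapses many inputs onto few outputs."""
            return abs(x) % 4

        def mutant(x: int) -> int:
            """Hypothetical single-change mutant of component (% 4 becomes % 5)."""
            return abs(x) % 5

        def domain_to_range_ratio(func, domain):
            """DRR = |domain| / |range|, taking the range to be the image of the domain."""
            image = {func(x) for x in domain}
            return len(domain) / len(image)

        def estimated_kill_rate(original, mutated, domain, trials=1000, seed=0):
            """Fraction of random inputs on which the mutant's output differs from
            the original's, i.e. the seeded fault is executed, infects the data
            state and propagates to an observable output."""
            rng = random.Random(seed)
            killed = 0
            for _ in range(trials):
                x = rng.choice(domain)
                if original(x) != mutated(x):
                    killed += 1
            return killed / trials

        if __name__ == "__main__":
            domain = list(range(-50, 51))  # 101 possible inputs
            print("DRR:", domain_to_range_ratio(component, domain))  # 101 / 4 = 25.25
            print("Estimated kill rate:", estimated_kill_rate(component, mutant, domain))

    On this toy component a high DRR (many inputs mapped onto few outputs) corresponds to information loss at the output, the circumstance in which the paper suggests faults are harder to reveal; the kill-rate estimate is only a sketch of how a mutation-style measurement might stand in for a PIE analysis, not the paper's own procedure.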


    Published In

    ACM SIGSOFT Software Engineering Notes, Volume 25, Issue 5
    September 2000, 211 pages
    ISSN: 0163-5948
    DOI: 10.1145/347636

    ISSTA '00: Proceedings of the 2000 ACM SIGSOFT International Symposium on Software Testing and Analysis
    August 2000, 211 pages
    ISBN: 1581132662
    DOI: 10.1145/347324
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 01 August 2000
    Published in SIGSOFT Volume 25, Issue 5

    Author Tags

    1. controllability
    2. domain-to-range ratio
    3. fault size
    4. observability
    5. testability



