
An experimental study of fault detection in user requirements documents

Published: 01 April 1992

Abstract

This paper describes a software engineering experiment designed to confirm results from an earlier project which measured fault detection rates in user requirements documents (URDs). The experiment involves the creation of a standardized URD with a known number of injected faults of specific types. Nine independent inspection teams were given this URD with instructions to locate as many faults as possible using the N-fold requirements inspection technique developed by the authors. Results obtained from this experiment confirm earlier conclusions about the low rate of fault detection in requirements documents using formal inspections and the advantages to be gained using the N-fold inspection method. The experiment also provides new results concerning variability in inspection team performance and the relative difficulty of locating different classes of URD faults.
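Computationally, the N-fold method the abstract describes amounts to taking the union of the fault reports filed by the independent inspection teams: each team's detection rate may be low, but the union can cover far more of the seeded faults. A minimal sketch of that aggregation in Python (the fault IDs and team reports below are hypothetical illustrations, not data from the experiment):

```python
# N-fold inspection aggregation: each team files a fault report (a set of
# fault IDs it found in the URD); the method's yield is the union of all
# reports. IDs and reports here are hypothetical.

seeded_faults = set(range(1, 21))    # e.g., 20 injected faults of known types

team_reports = [                      # one fault report per inspection team
    {1, 3, 7, 9, 12, 15, 18},
    {2, 3, 8, 9, 14, 15, 20},
    {1, 4, 7, 10, 12, 16, 19},
]

union = set().union(*team_reports)

for i, report in enumerate(team_reports, 1):
    rate = len(report & seeded_faults) / len(seeded_faults)
    print(f"team {i} detection rate: {rate:.0%}")
print(f"N-fold (union) coverage: {len(union & seeded_faults) / len(seeded_faults):.0%}")
```

With these hypothetical reports each team finds 35 percent of the seeded faults, yet the union covers substantially more, because the teams' misses only partially overlap.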

References

[1] Avizienis, A. The N-version approach to fault-tolerant software. IEEE Trans. Softw. Eng. SE-11, 12 (Dec. 1985).
[2] Boehm, B. Industrial software metrics top 10 list. IEEE Softw. (Sept. 1987).
[3] Collofello, J. S. The software technical review process. Curriculum Module SEI-CM-3-1.5, Software Engineering Institute, Carnegie Mellon Univ., Pittsburgh, Pa., 1988.
[4] Fagan, M. E. Design and code inspections to reduce errors in program development. IBM Syst. J. 15, 3 (1976).
[5] Fairley, R. Software Engineering Concepts. McGraw-Hill, New York, 1985.
[6] Jones, C. B. Systematic Software Development Using VDM. Prentice-Hall, Englewood Cliffs, N.J., 1986.
[7] Martin, J., and Tsai, W. T. N-fold inspection: A requirements analysis technique. Commun. ACM 33, 2 (Feb. 1990).
[8] Moher, T., and Schneider, G. M. Methodology and experimental research in software engineering. Int. J. Man-Mach. Stud. 16 (1982).
[9] Ramamoorthy, C. V. Application of a methodology for the development and validation of process control software. IEEE Trans. Softw. Eng. SE-7, 6 (Nov. 1981).



Reviews

Paul W. Abrahams

The authors propose a way to detect faults in a user requirements document (URD), a user's description of the functionality and performance of a software product, by carrying out N formal inspections of the document in parallel. They hypothesize that the N separate inspection teams do not significantly duplicate each other's efforts, so that the total number of faults detected will be much higher than the number found by any one team during a single inspection.

To verify their hypothesis, they carried out a controlled experiment in which nine teams of computer science graduate students formally inspected a requirements document describing a software system for real-time track control for railroads. The original document was written by a knowledgeable railroad engineer. Before the experiment, the document was independently inspected and reviewed by 40 students and by the authors in order to obtain as correct a document as possible. That document was then seeded with errors, including ones discovered by the preliminary reviewers, and turned over to the nine teams. Each team member reviewed the document individually, and then each team met as a group to discuss and debate the faults found by the individuals. Although the proportion of the faults found by a single team averaged 35 percent (with a maximum of 50 percent for the best team), the global coverage, that is, the proportion of faults found by at least one team, was 78 percent. (Apparently, the nine teams did not unearth any faults not found in the preliminary inspection.)

The paper describes the review process: “Following this individual review, [the team] met…for a formal group review, also lasting about two hours. In this…meeting, each team member would identify what he or she believed to be a problem with the existing URD, which would then be discussed and debated. If the team agreed that this did indeed represent a requirements fault, then they filled out a Fault Report….” From this description, it appears that the interactions among team members during the meeting did not lead to the discovery of additional faults, so the faults found by a team were just the aggregate of the faults found by its members individually.

The paper did not convince me that the costs of N-fold inspection are worth the benefits. If a single team cannot do better, on the average, than a 35 percent fault detection rate, then our ability to review requirements documents is sadly limited, and I wonder whether raising the rate to 78 percent is really worth it. The possibility remains that more skilled inspection teams, or teams spending more time on the project, would have detected nearly all of the faults. But were the single-team detection rates on the order of 98 percent rather than 35 percent, the results of the experiment described here would probably not apply. Indeed, the results of N-version programming are not encouraging; Nancy Leveson reports that common errors usually crop up in the products of independent programming teams (see, for example, her work with J. C. Knight [1]). The strongest argument for N-fold inspection is that since each requirements error costs so much more to fix later on, finding even one additional fault is worth the price.
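To put the review's numbers in perspective: if each of the nine teams found faults independently at the 35 percent average rate, the expected union coverage would be 1 - (1 - 0.35)^9, about 98 percent, well above the 78 percent observed; the shortfall is consistent with Leveson's point that independent teams tend to miss the same faults. A back-of-the-envelope check (the independence model here is an assumption of this note, not a claim of the paper or the review):

```python
# Expected N-fold coverage under an independence assumption: if each of
# N teams detects a given fault independently with probability p, then at
# least one team finds it with probability 1 - (1 - p)**N.

p = 0.35   # average single-team detection rate quoted in the review
N = 9      # number of inspection teams in the experiment

expected_union = 1 - (1 - p) ** N
print(f"expected coverage if teams were independent: {expected_union:.0%}")  # ~98%
print("observed global coverage: 78%")  # gap suggests teams miss the same faults
```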



Published In

ACM Transactions on Software Engineering and Methodology, Volume 1, Issue 2
April 1992
70 pages
ISSN:1049-331X
EISSN:1557-7392
DOI:10.1145/128894

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 April 1992
Published in TOSEM Volume 1, Issue 2


Author Tags

  1. fault detection
  2. inspections
  3. user requirements




Cited By

  • (2024) Classifying Ambiguous Requirements: An Explainable Approach in Railway Industry. 2024 IEEE 32nd International Requirements Engineering Conference Workshops (REW), 12-21. DOI: 10.1109/REW61692.2024.00007. Online publication date: 24-Jun-2024.
  • (2023) Influence Factors on User Manual Engagement in the Context of Smart Wearable Devices. Electronics 12, 17 (3539). DOI: 10.3390/electronics12173539. Online publication date: 22-Aug-2023.
  • (2023) Aggregating N-fold Requirements Inspection Results. Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, 339-347. DOI: 10.1145/3593434.3593465. Online publication date: 14-Jun-2023.
  • (2023) Development and application of user review quality model for embedded system. Microprocessors & Microsystems 74, C. DOI: 10.1016/j.micpro.2020.103029. Online publication date: 17-Oct-2023.
  • (2023) Diversity based multi-cluster over sampling approach to alleviate the class imbalance problem in software defect prediction. International Journal of System Assurance Engineering and Management. DOI: 10.1007/s13198-023-02031-x. Online publication date: 27-Jul-2023.
  • (2022) Ambiguity in Requirements Engineering: Towards a Unifying Framework. From Software Engineering to Formal Methods and Tools, and Back, 191-210. DOI: 10.1007/978-3-030-30985-5_12. Online publication date: 11-Mar-2022.
  • (2021) An empirical study toward dealing with noise and class imbalance issues in software defect prediction. Soft Computing - A Fusion of Foundations, Methodologies and Applications 25, 21 (13465-13492). DOI: 10.1007/s00500-021-06096-3. Online publication date: 1-Nov-2021.
  • (2020) Faulty Requirements Made Valuable: On the Role of Data Quality in Deep Learning. 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE), 61-69. DOI: 10.1109/AIRE51212.2020.00016. Online publication date: Sep-2020.
  • (2019) An empirical study on the potential usefulness of domain models for completeness checking of requirements. Empirical Software Engineering 24, 4 (2509-2539). DOI: 10.1007/s10664-019-09693-x. Online publication date: 1-Aug-2019.
  • (2016) Elicitation Practices That Can Decrease Vulnerability to Off-Nominal Behaviors: Lessons from using the Causal Component Model. SAE International Journal of Passenger Cars - Electronic and Electrical Systems 10, 1 (83-94). DOI: 10.4271/2016-01-8109. Online publication date: 27-Sep-2016.
  • Show More Cited By
