Software Engineering: Coding and Testing
Sukanya Basu
July 7, 2021
• A coding standard
• sets out standard ways of doing several things, such as
• how variables are named
• how code is laid out
• the maximum number of source lines allowed per function, etc.
• Coding guidelines: provide general suggestions about the coding style to be
followed
• leave the actual implementation of the guidelines to the discretion of the
individual engineers
• Good organizations often develop their own coding standards and guidelines
• depending on what best suits their needs
• We will discuss some representative coding standards and guidelines
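To make this concrete, part of a coding standard can be enforced mechanically. The sketch below (Python, standard library only) checks two hypothetical rules that are not taken from this text: function names must be snake_case, and no function may exceed a fixed number of source lines.

```python
import ast
import re

MAX_FUNC_LINES = 50                      # hypothetical team limit
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_source(source: str) -> list:
    """Report violations of two example coding-standard rules."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if not SNAKE_CASE.match(node.name):
                violations.append(f"{node.name}: name is not snake_case")
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                violations.append(
                    f"{node.name}: {length} lines exceeds limit of {MAX_FUNC_LINES}")
    return violations

print(check_source("def BadName():\n    pass\n"))
# → ['BadName: name is not snake_case']
```

In practice a team would delegate such checks to an off-the-shelf linter; the point is that a standard is mechanically checkable, while a guideline is left to engineer judgment.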
White-box Testing
• Designing white-box test cases
• Requires knowledge about the internal structure of software
• White-box testing is also called structural testing
Black-box Testing
• There are essentially two main approaches to design black-box test cases
• Equivalence class partitioning
• Boundary value analysis
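A small illustration of both techniques, using a hypothetical pass/fail score classifier (the function and its 0-100 range are invented for the example):

```python
def classify_score(score: int) -> str:
    """Hypothetical function under test: scores 0-100; 40 and above pass."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Equivalence class partitioning: one representative value per class
equivalence_cases = [(-10, ValueError),   # invalid: below range
                     (20, "fail"),        # valid: failing class
                     (70, "pass"),        # valid: passing class
                     (150, ValueError)]   # invalid: above range

# Boundary value analysis: values at and just around each boundary
boundary_cases = [(-1, ValueError), (0, "fail"), (39, "fail"),
                  (40, "pass"), (100, "pass"), (101, ValueError)]

def run(cases):
    for value, expected in cases:
        try:
            result = classify_score(value)
        except ValueError:
            result = ValueError
        assert result == expected, (value, result, expected)

run(equivalence_cases)
run(boundary_cases)
```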
Program Analysis Tools
• An automated program analysis tool
• takes program source code as input
• produces reports regarding several important characteristics of the program
• such as size, complexity, adequacy of commenting, adherence to programming
standards, etc.
• Some program analysis tools
• Produce reports regarding the adequacy of the test cases
• There are essentially two categories of program analysis tools
• Static analysis tools
• Dynamic analysis tools
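As a toy example of the static category, the sketch below inspects source text without ever executing it and reports size and comment adequacy; the metric names and the `analyze` helper are invented for illustration.

```python
def analyze(source: str) -> dict:
    """Static analysis sketch: size and commenting metrics for Python-style source."""
    lines = source.splitlines()
    code = [l for l in lines if l.strip() and not l.strip().startswith("#")]
    comments = [l for l in lines if l.strip().startswith("#")]
    total = len(code) + len(comments)
    return {
        "loc": len(code),                 # non-blank, non-comment lines
        "comment_lines": len(comments),
        "comment_ratio": round(len(comments) / total, 2) if total else 0.0,
    }

sample = "# add two numbers\ndef add(a, b):\n    return a + b\n"
print(analyze(sample))
# → {'loc': 2, 'comment_lines': 1, 'comment_ratio': 0.33}
```

A dynamic analysis tool, by contrast, instruments the program and gathers data while it runs, e.g. which statements the test cases actually executed.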
System testing
• System testing involves
• validating a fully developed system against its requirements
• In the top-down approach
• testing waits till all the top-level modules are coded and unit tested
• In the bottom-up approach
• testing can start only after the bottom-level modules are ready
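Conventionally (though not stated above), top-down testing avoids waiting for the lower levels by replacing them with stubs, while bottom-up testing drives finished lower-level modules with test drivers. A minimal stub sketch, with hypothetical module names:

```python
def tax_stub(amount: float) -> float:
    """Stub standing in for a lower-level tax module that is not yet written.
    Returns a canned value so the top-level module can be tested early."""
    return 0.0

def total_price(amount: float, tax=tax_stub) -> float:
    """Top-level module under test; normally depends on the real tax module."""
    return amount + tax(amount)

# Top-down: exercise total_price now; swap in the real tax module later.
assert total_price(100.0) == 100.0
```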
• Alpha Testing
• System testing is carried out by the test team within the developing
organization
• Beta Testing
• System testing performed by a select group of friendly customers
Stress Testing
• Stress testing (also known as endurance testing)
• imposes abnormal inputs to stress the capabilities of the software
• input data volume, input data rate, processing time, utilization of memory,
etc. are tested beyond the designed capacity
• Let
• N be the total number of errors in the system
• n of these errors be found by testing
• S be the total number of seeded errors
• s of the seeded errors be found during testing
• n/N = s/S
• N = S × n/s
• Remaining defects:
N - n = n ((S - s)/ s)
• Example
• 100 seeded errors were introduced
• 90 of these seeded errors were found during testing
• 50 other (unseeded) errors were also found
• Remaining errors = 50 × (100 - 90)/90 ≈ 6
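The arithmetic above, wrapped in a small helper (the symbols follow the definitions in the text; the function name is invented):

```python
def estimate_remaining(n: int, S: int, s: int) -> float:
    """Error-seeding estimate of latent defects.
    n: unseeded errors found, S: errors seeded, s: seeded errors found.
    Remaining unseeded defects = n * (S - s) / s."""
    return n * (S - s) / s

# Worked example from the text: S = 100, s = 90, n = 50
remaining = estimate_remaining(n=50, S=100, s=90)
print(round(remaining, 2), "->", round(remaining))
# → 5.56 -> 6
```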
• The kind of seeded errors should match closely with existing errors
• However, it is difficult to predict the types of errors that exist
• Categories of remaining errors
• can be estimated by analyzing historical data from similar projects.