_______________________________________________________________________________
GUIDELINES FOR APPLICATION SOFTWARE TESTING
TABLE OF CONTENTS
________________________________________________________________________________
APPENDICES
A. Checklist on Unit Testing
B. Checklist on Link/Integration Testing
C. Checklist on Function Testing
D. Checklist on System Testing
E. Checklist on Acceptance Testing
F. Checklist for Contracted out Software Development
G. List of Software Testing Certifications
H. Independent Testing Services
1. PURPOSE
The purpose of this document is to provide a set of application software testing
guidelines to ensure that computer systems are properly tested, in pursuit of reliable,
high-quality computer systems.
2. SCOPE
This document gives a set of guidelines for reference by application project teams
in planning and carrying out testing activities for application software. It is
important that users of this document (i.e. application project teams) should not
treat these guidelines as mandatory standards, but as a reference model, and
should adapt the guidelines to each project's actual situation.
This document is suitable for development projects following the SDLC defined
in the Information Systems Procedures Manual. For maintenance projects, these
guidelines may need to be adjusted to the project's actual situation; such
adjustments are the responsibility of the individual project team.
3. REFERENCES
3.1 STANDARDS
Nil
4. DEFINITIONS AND CONVENTIONS
4.1 DEFINITIONS
Nil
4.2 CONVENTIONS
Nil
5. OVERVIEW
This document, in essence, suggests a reference model for the planning and conduct of
Application Software Testing. The following serves as an overview of the model:
The PSC champions the project and is the ultimate decision-maker for the project. It
provides steer and support for the IPM and endorses acceptance of project deliverables.
The IPM manages the project and monitors the project implementation on a day-to-day
basis for the Project Owner/the PSC.
The PAT is responsible for overseeing project progress and managing quality assurance
activities, which include:
(a) recommending the test plans, test specifications and test summary report for
endorsement by the PSC; and
(b) co-ordinating, monitoring and resolving priority conflicts on the testing activities to
ensure smooth running of testing activities.
Please refer to the Practice Guide to Project Management for IT Projects under an
Outsourced Environment (PGPM) 2 for more details of the project organisation.
Testing is the process of executing a program with the intent of finding errors. Since it
is such a "destructive" process, it may be more effective and successful if the testing is
performed by an independent third party rather than by the original system analysts /
programmers.
A Test Group can be set up to carry out the testing activities especially for large-scale
projects or projects involving a large number of users. The emphasis here is on the
independent role of the Test Group, which does not necessarily mean dedicated
2 Practice Guide to Project Management for IT Projects under an Outsourced Environment [S19] can be found on the ITG InfoStation web site at http://itginfo.ccgo.hksarg/content/pgpm/index.asp.
The following figure shows an example of project organisation with the formation of a
Test Group and an optional Independent Testing Contractor providing independent
testing services. Note that independent testing may be conducted by a Test Group of
in-house staff members as well as by an external contractor.
(ii)
Contractor project manager (or IPM for in-house developed project) to enrich
the test plans by engaging his/her staff to draft test cases; internal IT staff to
check all major test plans; and business users to provide different test cases to
address different scenarios; and
(iii) Contractor project manager (or IPM for in-house developed project) to maintain
ongoing communication and collaboration among stakeholders by distributing
all major test plans and feedback to stakeholders regularly to keep them
informed of the project progress throughout the whole system development stage.
A computer system is subject to testing from the following five different perspectives:
(v) To validate the integrated software against end-user needs and business
requirements (Acceptance Testing).
(Refer to Section 7)
(Refer to Section 8)
Monitor the day-to-day progress of the testing activities through the use of Test Progress
Reports.
Project teams may record testing metrics information in a centralized database for
future test planning reference.
3 Quality assurance staff should be the IPM or other delegated staff. However, members of the Test Group should not take up the quality assurance role for the project if the tests are conducted by them rather than by an Independent Testing Contractor.
6. GENERAL CONCEPTS OF TESTING
All these objectives and the corresponding actions contribute to improving the quality
and reliability of the application software.
White-Box Testing, also known as Code Testing, focuses on the internal logic of the
software, to assure that all code statements and logical paths have been tested.
Testing                                                    Approach Applied
(a) Unit Testing                                           White Box Test
    - Testing of the program modules in isolation
      with the objective to find discrepancies between
      the programs and the program specifications
(b) Test cases must be written for invalid and unexpected, as well as valid and
expected input conditions. A good test case is one that has a high probability of
detecting undiscovered errors. A successful test case is one that detects an
undiscovered error.
(c) A necessary part of a test case is defining the expected outputs or results.
(d) Do not plan the testing effort on the assumption that no errors will be found.
(e) If many more errors are discovered in a particular section of a program than in
other sections, it is advisable to spend additional testing effort on this error-prone
section, as further errors are likely to be discovered there.
(g) The later in the development life cycle a fault is discovered, the higher the cost of
correction.
The testing mentioned in section 6.3 essentially follows a Bottom-Up approach, which
has one associated shortcoming: requirements and design errors can only be
identified at a late stage. To circumvent this, it is most important that the following
reviews should also be performed:
(i) to review the design specification with the objective to identify design items
(ii) proposed participants : e.g. system analysts (on system design), Test Group,
computer operators (on operational procedures), business analysts (on
functional specifications), quality assurance staff, users (on user interface and
manual procedures), domain architects (on architecture design) and where
possible domain experts from vendors (on specific domains such as storage
and network infrastructure).
(i) to walk through program modules (at least the most critical ones) with the
objective of identifying errors against the program specifications.
(Please refer to Section 11.2 regarding when these reviews should be carried out)
The independent Test Group conducts testing on the system with the aim of revealing
defects that cannot be found by the project team, improving without bias the quality of
the system in fulfilling project or business needs. An independent Test Group
sometimes engages professional software testers to perform testing according to
internationally recognised standards and methodologies, ensuring the quality of
testing conforms to the requirements of the project or business.
The independent Test Group may acquire external professional testing services from an
Independent Testing Contractor to conduct various types of testing to help discover
hidden problems in the system. The benefits of acquiring external independent testing
services are:
(a) to improve the objectiveness and accuracy of test results, free from the influence of
the developers or users;
(b) to meet industry or business standards, or comply with policies/regulations;
(c) to bring in expertise and skills which may not be available in the project team;
(d) to incorporate more effective and standardised quality control and failure analysis;
(e) to enhance the quality, availability and reliability of the software, reducing the
costs of remediation during maintenance;
For example, if a project develops a mission-critical system in which quality is very
important, it is helpful to employ an external Independent Testing Contractor to perform
system testing to assure quality and reduce the number of defects. For a high-risk
project, acquiring a professional testing contractor to help conduct function or system
testing can increase the project success rate.
Independent testing may be conducted by an in-house team other than the project
development team; this can provide an impartial test result from the angle of an
independent third-party, though not an external contractor.
As the in-house independent team has more business domain knowledge than
outsourced contractors, the time and resources required to understand the business
requirements before conducting testing are reduced.
7. LEVELS OF TESTING
7.1 UNIT TESTING
7.1.1 Scope of Testing
Unit Testing (or Module Testing) is the process of testing the individual subprograms,
subroutines, classes, or procedures in a program. The goal here is to find discrepancies
between the programs and the program specifications prior to integration with other
units.
(a) Designers to include test guidelines (i.e. areas to be tested) in the design and
program specifications.
(b) Programmers to define test cases during program development time.
(c) Programmers to perform Unit Testing after coding is completed.
(d) Programmers to include the final results of Unit Testing in the program
documentation.
(a) Testing should first be done with correct data, then with flawed data.
(b) A program unit would only be considered as completed after the program
documentation (with testing results) of the program unit has been submitted to the
project leader/SA.
(c) If there are critical sections of the program, design the testing sequence such that
these sections are tested as early as possible. A ‘critical section’ might be a
complex module, a module with a new algorithm, or an I/O module.
(d) Unit Testing will be considered complete if the following criteria are satisfied:
(i) all test cases are properly tested without any errors;
(ii) there is no discrepancy found between the program unit and its specification;
and
(iii) program units of critical sections do not have any logical and functional
errors.
(e) Well-documented test cases can be reused for testing at later stages.
(f) A team-testing approach may be applied in Unit Testing to improve the coverage of
testing. A team is formed consisting of two or more programmers. The first
programmer will write the test case for the other programmer to code. Upon
completion, the first programmer will conduct unit test on the program according to
the test case. This helps to improve the accuracy and reduce the defects found in the
program.
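The guidelines above can be sketched as a small unit test. This is a minimal illustration only: the program unit `apply_discount` and its "specification" are hypothetical, invented for this sketch, and the test cases follow guideline (a) by exercising correct data first and flawed data afterwards.

```python
import unittest

# Hypothetical program unit under test; the name and the business
# rule are illustrative assumptions, not taken from these guidelines.
def apply_discount(amount, rate):
    """Per the (hypothetical) program spec: reduce amount by rate (0.0-1.0)."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0.0 and 1.0")
    return round(amount * (1.0 - rate), 2)

class ApplyDiscountUnitTest(unittest.TestCase):
    # Correct data first (guideline (a))...
    def test_valid_rate(self):
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_boundary_rates(self):
        self.assertEqual(apply_discount(100.0, 0.0), 100.0)
        self.assertEqual(apply_discount(100.0, 1.0), 0.0)

    # ...then flawed data, checked against the expected outcome.
    def test_flawed_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)
```

Such well-documented test cases, kept with the program documentation, can be re-run at later stages as the guidelines suggest (run with `python -m unittest`).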
(h) A code review may be performed after the Unit Testing. A code review is a process
of carefully inspecting the source code of a software program, line by line, by one or
more reviewers with the aid of automated testing tools, aiming to find defects in
the code before proceeding to the next stage of development. It helps improve the
quality and maintainability of the program.
(a) Both the control and data interfaces between the programs must be tested.
(ii) Bottom-up integration, as its name implies, begins assembly and testing with
modules at the lowest levels in the software structure. Because modules are
integrated from the bottom up, processing required for modules subordinate
to a given level is always available, and the need for stubs (i.e. dummy
modules) is eliminated.
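The bottom-up assembly described above can be sketched as follows. The module names are illustrative assumptions: two unit-tested low-level modules are assembled into a higher-level module, and a simple driver (rather than stubs) exercises both the data and control interfaces at each level.

```python
# Bottom-up integration sketch: the low-level modules are real,
# so no stubs (dummy modules) are needed; a driver exercises them.

# Lowest-level modules, assumed already unit-tested.
def read_record(raw):
    """Parse a 'name,amount' line into a (name, float) pair."""
    name, amount = raw.split(",")
    return name.strip(), float(amount)

def total(amounts):
    """Sum a sequence of amounts."""
    return sum(amounts)

# Next level up: integrates the two low-level modules.
def summarise(lines):
    """Return the grand total of the amounts in the input lines."""
    records = [read_record(line) for line in lines]
    return total(amount for _, amount in records)

# Driver for the integration test: feeds data through the assembled
# levels, checking each level before the one above it.
def integration_driver():
    assert read_record("a, 1.5") == ("a", 1.5)      # level 1 alone
    assert summarise(["a, 1.5", "b, 2.5"]) == 4.0   # levels 1 + 2 together
    return "pass"
```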
(a) Test Group to prepare a Function Testing test plan, to be endorsed by the PSC via the
PAT, before testing commences. (Please refer to Section 9.3)
(b) Test Group to prepare a Function Testing test specification before testing
commences.
(c) Test Group, with the aid of the system analysts/programmers, to set up the testing
environment.
(d) Test Group (with the participation of user representatives) to perform Function
Testing and, upon faults being found, issue test incident reports to the system
analysts/programmers, who fix the errors.
(e) Test Group to report progress of Function Testing through periodic submission of
the Function Testing Progress Report.
(a) It is useful to involve some user representatives in this level of testing, in order to
give them familiarity with the system prior to Acceptance test and to highlight
differences between users’ and developers’ interpretation of the specifications.
However, degree of user involvement may differ from project to project, and even
from department to department, all depending on the actual situation.
(b) User involvement, if applicable, could range from test data preparation to the staging
out of the Function Testing.
(c) It is useful to keep track of which functions have exhibited the greatest number of
errors; this information is valuable because it tells us that these functions probably
still contain some hidden, undetected errors.
(d) The following methods are useful in designing test case in Function Testing:
(i)
Equivalence Partitioning
This testing technique is to divide (i.e. to partition) a set of test conditions into
groups or sets that can be considered the same. It assumes that all conditions
in one partition will be treated in the same way by the integrated software.
Therefore, only one test case is required to be designed to cover each partition.
For example, an input field accepting integer values between -1,000 and
+1,000 can be expected to handle negative integers, zero and
positive integers. The test data is partitioned into three equivalent sets of
acceptable values. In addition to numbers, this technique can also apply to any set
of data that is considered equivalent, e.g. file type. By using this technique,
testing one value in a partition is equivalent to testing all of them.
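The worked example above can be sketched in code. The validator `accept` is a hypothetical stand-in for the input field; one representative value per partition suffices, including the invalid partitions outside the -1,000 to +1,000 range.

```python
# Equivalence-partitioning sketch for an input field accepting
# integers between -1,000 and +1,000 (the example from the text).

def accept(value):
    """Hypothetical validator: True for integers in [-1000, 1000]."""
    return isinstance(value, int) and -1000 <= value <= 1000

# One representative test case per partition is enough.
partitions = {
    "negative integers": -500,   # valid partition
    "zero":              0,      # valid partition
    "positive integers": 500,    # valid partition
    "below lower bound": -1001,  # invalid partition
    "above upper bound": 1001,   # invalid partition
}

expected = {
    "negative integers": True,
    "zero":              True,
    "positive integers": True,
    "below lower bound": False,
    "above upper bound": False,
}

# Executing one case per partition covers all five partitions.
results = {name: accept(value) for name, value in partitions.items()}
```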
System Testing is the process of testing the integrated software with regard to the
operating environment of the system (e.g. Recovery, Security, Performance, Storage,
etc.)
It may be worthwhile to note that the term has been used with different scopes.
In its widest definition, especially for small-scale projects, it also covers the scope of
Link/Integration Testing and Function Testing.
For small scale projects which combine the Link/Integration Testing, Function Testing
and System Testing in one test plan and one test specification, it is crucial that the test
specification should include distinct sets of test cases for each of these three levels of
testing.
(a) Test Group to prepare a System Testing test plan, to be endorsed by the PSC via the
PAT, before testing commences.
(b) Test Group to prepare a System Testing test specification before testing commences.
(c) Test Group, with the aid of the designers/programmers, to set up the testing
environment.
(d) Test Group (with the participation of the computer operators and user
representatives) to perform System Testing and, upon faults being found, issue test
incident reports to the system analysts/programmers, who would fix the errors.
(e) Test Group to report progress of the System Testing through periodic submission of
the System Testing Progress Report.
(a) 13 types of System Test are discussed below. It is not claimed that all 13 types are
mandatory for every application system, nor are they meant to be an exhaustive
list. To avoid overlooking any of them, all should be explored when designing
test cases.
(i) Volume Testing
Volume testing is to subject the system to heavy volumes of data under normal
workload; the attempt is to show that the system cannot handle the volume of
data specified in its objectives. Since volume testing is obviously expensive,
in terms of people time, one must not go overboard. However, every system
must be exposed to at least a few volume tests.
(ii) Stress Testing
Stress testing involves subjecting the program to heavy loads or stress. A
heavy stress is a peak volume of data encountered over a short span of time.
Although some stress tests may simulate situations that will never occur during
operational use, this does not imply that these tests are not useful. If
errors are detected by these 'impossible' conditions, the test is valuable,
because the same errors might also occur in realistic, less stressful
situations.
(iii) Performance Testing
Many programs have specific performance or efficiency objectives, such as
response times and throughput rates under certain workload and configuration
conditions. Performance testing should attempt to show that the system does
not satisfy its performance objectives.
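A response-time objective of the kind described can be checked as sketched below. Both the transaction (`operation`) and the 0.1-second objective are illustrative assumptions; the test times repeated runs and compares the worst case against the objective, attempting to show the objective is not met.

```python
import time

OBJECTIVE_SECONDS = 0.1  # hypothetical response-time objective

def operation():
    """Stand-in for the transaction whose response time is tested."""
    return sum(range(10_000))

def worst_response_time(runs=50):
    """Worst observed response time over several runs of the workload."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        worst = max(worst, time.perf_counter() - start)
    return worst

def meets_objective(runs=50):
    """True only when even the worst case is within the objective."""
    return worst_response_time(runs) <= OBJECTIVE_SECONDS
```

Repeating the measurement and taking the worst case, rather than a single run, reduces the chance that one lucky execution hides a failure to meet the objective.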
(iv) Recovery Testing
If processing must continue during periods in which the application system is
not operational, then those recovery processing procedures/contingent actions
should be tested during the System test. In addition, the users of the system
should be involved in a complete recovery test so that not only the application
system is tested but the procedures for performing the manual aspects of
recovery are tested. Examples on recovery testing are Disaster Recovery
Testing and Backup and Restore Testing.
(v) Security Testing
Security testing is to test the adequacy of the security controls and procedures
imposed in the system by attempting to violate them. Test cases are devised to
subvert the imposed security checks and to search for security holes in existing
programs. For example, testing should attempt to access or modify data by an
individual not authorized to access or modify that data. Code review may be
conducted on programs to detect any insecure code, such as hard-coded
passwords.
that the completeness of operator instructions and the ease with which the
system can be operated can be properly evaluated. This testing is optional,
and should be conducted only when the environment is available.
(b) It is understood that in real situations, due possibly to environmental reasons, some
of the tests (e.g. the Procedures test) may not be carried out at this stage and have to
be deferred to later stages. Such deferral may be acceptable provided that the
reasons are documented clearly in the Test Summary Report and the tests are carried
out once the constraints are removed.
Acceptance Testing is the process of comparing the application system to its initial
requirements and the current needs of its end users. The goal here is to determine
whether the software end product is acceptable to its users and meets the business
requirements.
(a) Test Group to prepare an Acceptance Testing test plan, which is to be endorsed by
the PSC via the PAT.
(c) Test Group, with the aid of the project team, to set up the testing environment;
(d) Test Group to perform testing according to the Acceptance Test Plan; and upon
fault found issue test incident reports to the system analysts/programmers, who
will fix up the liable error; and
(e) Test Group to report progress of the Acceptance Testing through periodic
submission of Acceptance Testing Progress Report.
(f) IPM to keep the overall Test Summary Report as documentation proof.
(a) In general, Acceptance Testing conducted by the business users is often called
"User Acceptance Testing". The objective is to ensure that the system satisfies the
acceptance criteria stated in the requirements document before release to the daily
operational environment. Sufficient effort and involvement of both the project team
and end users in User Acceptance Testing is of paramount importance in ensuring
the quality of the system. The business impact and the cost and time of fixing a
production problem after live run are higher than those during the acceptance stage.
Spending more effort on User Acceptance Testing is not only a prudent approach to
delivering quality services, but will also reduce the overall costs and impact at the
maintenance stage.
User Acceptance Testing should be performed by business users to prove that a new
system works according to their understanding of their business requirements.
Business users have the necessary knowledge and understanding of business
requirements that IT testers may not have. It is therefore essential to get the
business users involved in the testing and not rely only on IT professionals.
Sufficient business-user resources shall be planned and allocated to conduct the
testing.
User Acceptance Testing for projects with relatively clear requirements during
the development stage normally accounts for about 5% to 10% of the project
implementation effort. For projects with complicated requirements that are
not easy to specify clearly during the development stage, the effort required for
User Acceptance Testing may be up to 20% of the project implementation effort.
The effort required by business users can be estimated based on the number and
complexity of test cases required to verify the system against the business
requirements. Detailed test cases may only be available at a later stage of the
project, normally after System Analysis and Design. Nonetheless, the effort can be
estimated based on the following information when available:
(i) the number of business processes and their level of complexity, which would
determine how much time is required to go through those processes and
validate the acceptance criteria with the system;
(ii) the number of functional requirements and their level of complexity (can be
measured by number of data sets required for a function), which would
determine how much time is required to go through those functions and
validate the acceptance criteria with the system;
(iii) the number of inputs and outputs (screens, reports, interfaces, etc.) and their
level of complexity (can be measured by no. of fields on screens, reports, etc,
and their validation requirements), which would determine how much time is
required to go through those inputs and outputs and validate the acceptance
criteria with the system;
(iv) the total number of test cases, the complexity and the time required for
completing these test cases based on the parameters mentioned above.
The number of rounds of testing to be conducted shall also be considered in the
total effort required. Two to three rounds of testing are normally considered the
minimum. More rounds of testing shall be considered if the business requirements
are complicated or the quality of the development team is less assured.
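The estimation parameters above (number and complexity of test cases, times the number of rounds) can be sketched as a simple calculation. All figures below, including the per-case hours and case counts, are hypothetical assumptions for illustration, not rates prescribed by these guidelines.

```python
# UAT effort-estimation sketch: per-case hours by complexity,
# summed over the test cases and multiplied by the rounds of testing.
# Every number here is an illustrative assumption.

HOURS_PER_CASE = {"simple": 0.5, "medium": 1.0, "complex": 2.0}

def uat_effort_hours(case_counts, rounds=2):
    """Estimate business-user effort in hours for all rounds of UAT."""
    one_round = sum(HOURS_PER_CASE[complexity] * count
                    for complexity, count in case_counts.items())
    return one_round * rounds

# e.g. 40 simple, 20 medium and 10 complex cases over three rounds
estimate = uat_effort_hours({"simple": 40, "medium": 20, "complex": 10},
                            rounds=3)
```

The same skeleton can be re-based on business processes or input/output counts (items (i) to (iii) above) once their complexity weights are agreed.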
(e) For large-scale application systems, or systems involving new business or involving
a large number of new users, it may be better if additional user testing is performed
prior to the User Acceptance Testing. They are generally named as follows:
(i) Alpha Testing
Alpha Testing takes place in the development environment and is performed by
some end users. Developers can observe problems while users are testing the
system before
8. TEST DOCUMENTATION
8.1 INTRODUCTION
End Level
End Project
The above documentation will be subject to quality assurance checks for existence and
completeness by the Quality Assurance Staff.
Note: For small-sized projects, the test plans and test specifications for
Link/Integration Testing, Function Testing and System Testing could be combined.
To prescribe the scope, approach, resources, and schedule of testing activities for a level
of testing. To identify the items being tested, the features to be tested, the testing tasks
to be performed, and the personnel responsible for each task.
This test plan should be prepared for each level of testing except Unit Testing.
(*Do not plan on the assumption that each test case will only be executed once)
For each task, list the estimated effort required and duration.
For example,

Task  Task Description                    Estimated Effort   Relative Calendar Week
No.                                       (man-weeks)        0 1 2 3 4 5 6 7 8 9 ...
1     Prepare test specification on
      - Test control procedures           #                  XXXXX
      - Test cases                        #                  XXXX
      - Test data                         #                  XXX
      - Testing environment               #                  XX
(c) Responsibilities of relevant parties. For each party, specify their corresponding
responsibilities for the testing levels.
(d) Remarks. Describe any special constraint on the test procedures, identify any
special techniques and tools requirements that are necessary for the execution of this
test.
The test specification should provide, at the least, the following information:
It should be noted that the definition of test cases is a "design" process and does
vary across projects. Please refer to Appendices A to F for test case design checklists.
To document any event that occurs during the test process which requires investigation.
The report is to be issued to the system analysts/programmers for the errors found
during testing.
(b) Summary
Summarize the incident.
Identify the test items involved indicating their versions/revision levels.
(i) Inputs
(ii) Expected results
(iii) Anomalies
(iv) Date and time
(v) Procedure step
(vi) Environment
(vii) Testers and Observers
(d) Impact
If known, indicate what impact this incident will have on test plan and test
procedure specification.
Incident Description :
Impact :
Results of Investigation :
Investigated By :                                Date :
In order that the progress of the test process is controlled properly, a periodic test
progress report should be prepared by the Test Group and submitted to the PAT / IPM.
8.5.2 Terminology
No. of test cases specified (#1): the total number of test cases that have been specified.
No. of test cases tried at least once (#2): the number of specified test cases that have
been put into test execution at least once.
The percentage of #2 over #1 gives the percentage of the specified test cases that have
been executed at least once. More importantly, the complement of this percentage gives
the percentage of the specified test cases that have never been executed so far.
No. of test cases completed: the number of specified test cases that have been
executed with the expected output generated.
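As an illustration, the progress percentages defined above can be computed as follows (a minimal sketch; the function name and sample counts are hypothetical, not part of these guidelines):

```python
# Hypothetical sketch: computing the test progress percentages described above.
def progress_metrics(specified, tried_at_least_once, completed):
    """All arguments are counts of test cases; 'specified' (#1) must be > 0."""
    tried_pct = 100.0 * tried_at_least_once / specified   # #2 over #1
    never_run_pct = 100.0 - tried_pct                     # complement: cases never executed
    completed_pct = 100.0 * completed / specified
    return tried_pct, never_run_pct, completed_pct

# Example: 200 cases specified, 150 tried at least once, 120 completed.
tried, never_run, done = progress_metrics(200, 150, 120)
# tried == 75.0, never_run == 25.0, done == 60.0
```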
Levels of Testing :
Reporting Period :
(In the reporting period) (Over-all)
Notes: Testing effort should include ALL the effort directly related to the testing
activities, but exclude the administration overhead.
To summarize the results of the testing activities for documentation purposes, and to
provide information for future test planning reference.
(b) Remarks
GUIDELINES FOR TEST PLANNING AND
APPLICATION SOFTWARE TESTING CONTROL
________________________________________________________________________________
The objective of test planning is to prepare the blueprint used by the project personnel
and users to ensure that the application system achieves the level of correctness and
reliability desired by the user.
6. Purchase or develop test tools before they are needed; and
Some test control actions that may be performed during the test process are:
(i) decision making based on information gathered and reported in test activities;
(ii) resetting the priority of tests when an identified risk occurs, e.g. late delivery of
the programs to be tested; and
(iii) rescheduling of test activities because of the late availability of the test environment.
GUIDELINES FOR AUTOMATED TOOLS FOR TESTING
APPLICATION SOFTWARE TESTING
________________________________________________________________________________
10.1 INTRODUCTION
Because software testing often accounts for as much as 50% of all the effort expended on
a software implementation project, tools that can reduce test time without reducing
thoroughness are very valuable. For this purpose, the use of the following types of
automated tools is most desirable.
GUIDELINES FOR SUMMARY
APPLICATION SOFTWARE TESTING
________________________________________________________________________________
11. SUMMARY
GUIDELINES FOR APPENDIX A
APPLICATION SOFTWARE TESTING CHECKLIST ON UNIT TESTING
________________________________________________________________________________
(This checklist, which suggests areas for the definition of test cases, is for information
purposes only and is in no way meant to be an exhaustive list. Please also note that a
negative tone, matching the suggestions in section 6.1, has been used.)
Input
1. Validation rules of data fields do not match the program/data specification.
Output
1. Output messages are shown with misspellings, incorrect meanings, or inconsistent wording.
2. Output messages are shown when they are not supposed to be; or they are not shown when
they are supposed to be.
3. Reports/Screens do not conform to the specified layout, with misspelled data labels/titles,
mismatched data labels and information content, and/or incorrect data sizes.
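A negative-tone test case of the kind suggested above can be sketched as follows (a minimal illustration only; validate_age and its validation rule are hypothetical, not taken from any program specification):

```python
# Hypothetical field-validation routine under test.
def validate_age(value):
    """Accept integer ages 0-150; reject everything else."""
    return isinstance(value, int) and 0 <= value <= 150

# Negative-tone test cases: each probes an input the program should reject.
assert not validate_age(-1)       # below range
assert not validate_age(151)      # above range
assert not validate_age("42")     # wrong type
assert validate_age(0) and validate_age(150)   # boundary values accepted
```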
File Access
4. Program data storage areas do not match the file layout.
7. Deadlock occurs when the same record/file is accessed or updated by more than one user.
Internal Logic
2. Mathematical accuracy and rounding do not conform to the prescribed rules.
3. Program execution sequence does not follow the JCL condition codes or control script
settings.
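The rounding check above can be probed with a short sketch (assuming, hypothetically, that the prescribed rule is round-half-up to two decimal places; the function name is illustrative only):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_amount(value):
    """Round a monetary amount half-up to 2 decimal places."""
    return Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Python's built-in round() on floats does NOT follow a half-up rule here,
# because 2.675 is stored as a binary float slightly below 2.675:
assert round(2.675, 2) != 2.68                     # rounds to 2.67
assert round_amount("2.675") == Decimal("2.68")    # conforms to the half-up rule
```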
Program Documentation
Performance
GUIDELINES FOR APPENDIX B
APPLICATION SOFTWARE TESTING CHECKLIST ON LINK/INTEGRATION TESTING
________________________________________________________________________________
(This checklist, which suggests areas for the definition of test cases, is for information
purposes only and is in no way meant to be an exhaustive list. Please also note that a
negative tone, matching the suggestions in section 6.1, has been used.)
1. Global variables have different definitions and/or attributes in the programs that
reference them.
Program Interfaces
1. The called programs are not invoked when they are supposed to be.
2. Any two interfaced programs have different numbers of parameters, and/or the attributes
of these parameters are defined differently in the two programs.
3. Passing parameters are modified by the called program when they are not supposed to
be.
4. Called programs behave differently when the calling program calls twice with the same
set of input data.
5. File pointers held in the calling program are destroyed after another program is called.
Error Handling
1. The same error is treated differently (e.g. with different messages, with different
termination status, etc.) in different programs.
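Item 3 under Program Interfaces above (unintended modification of passing parameters) can be illustrated with a short sketch (the function and variable names are hypothetical, not from any actual program):

```python
# A called routine that (incorrectly) modifies a parameter it receives.
def compute_total(amounts):
    amounts.sort()            # side effect: mutates the caller's list in place
    return sum(amounts)

data = [3, 1, 2]
total = compute_total(data)
# total == 6, but data is now [1, 2, 3] -- the caller's parameter was changed.

# Defensive version: work on a copy so the caller's data is untouched.
def compute_total_safe(amounts):
    return sum(sorted(amounts))
```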
GUIDELINES FOR APPENDIX C
APPLICATION SOFTWARE TESTING CHECKLIST ON FUNCTION TESTING
________________________________________________________________________________
(This checklist, which suggests areas for the definition of test cases, is for information
purposes only and is in no way meant to be an exhaustive list. Please also note that a
negative tone, matching the suggestions in section 6.1, has been used.)
Comprehensiveness
Correctness
1. The developed transaction/report does not achieve the stated business function.
GUIDELINES FOR APPENDIX D
APPLICATION SOFTWARE TESTING CHECKLIST ON SYSTEM TESTING
________________________________________________________________________________
(This checklist, which suggests areas for the definition of test cases, is for information
purposes only and is in no way meant to be an exhaustive list. Please also note that a
negative tone, matching the suggestions in section 6.1, has been used.)
Volume Testing
Stress Testing
1. The system cannot handle a pre-defined number of transactions over a short span of time.
Performance Testing
1. The response times exceed a pre-defined time limit under certain workloads.
Recovery Testing
Security Testing
2. The system does not log out automatically in the event of a terminal failure.
Procedure Testing
Regression Testing
1. The sub-system / system being installed affects the normal operation of the other
systems / sub-systems already installed.
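One of the checks above, the response-time limit under Performance Testing, can be sketched as follows (a minimal illustration; the transaction function and the 2-second limit are hypothetical assumptions, not values from these guidelines):

```python
import time

TIME_LIMIT_SECONDS = 2.0   # hypothetical pre-defined response-time limit

def run_transaction():
    # Stand-in for the transaction under test.
    time.sleep(0.01)

start = time.perf_counter()
run_transaction()
elapsed = time.perf_counter() - start
assert elapsed <= TIME_LIMIT_SECONDS, f"response time {elapsed:.3f}s exceeds limit"
```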
Operation Testing
1. The information inside the operation manual is not clear, or is not consistent with the
application system.
2. The operation manual does not cover all the operation procedures of the system.
GUIDELINES FOR APPENDIX E
APPLICATION SOFTWARE TESTING CHECKLIST ON ACCEPTANCE TESTING
________________________________________________________________________________
(This checklist, which suggests areas for the definition of test cases, is for information
purposes only and is in no way meant to be an exhaustive list. Please also note that a
negative tone, matching the suggestions in section 6.1, has been used.)
Comprehensiveness
Correctness
1. The developed transaction/report does not achieve the stated business function.
GUIDELINES FOR APPENDIX F CHECKLIST FOR
APPLICATION SOFTWARE TESTING CONTRACTED-OUT SOFTWARE DEVELOPMENT
________________________________________________________________________________
1. Tailor and suitably incorporate the following in the tender specification or work assignment
brief, as appropriate.
2. Check for the inclusion of an overall test plan in the tender proposal, or accept it as the first
deliverable from the contractor.
3. Review and accept the different types of test plans, test specifications and test results.
4. Wherever possible, ask if there are any tools (ref. section 10) that can help demonstrate the
completeness and test coverage of the developed software.
7. Ask for the contractor’s contribution in preparing the Acceptance Test process.
8. Ask for a Test Summary Report (ref. section 8.6) at the end of the project.
9. If a third party independent testing service is required for a particular type of testing (e.g.
system testing), define the scope of the testing service needed and state the requirements of
the testing in a separate assignment brief or service specification for procurement of the
independent testing service. Such a service should be acquired separately from the
procurement of software development. In addition, the service requirements for support of,
and coordination with, the Independent Testing Contractor should be added to the software
development tender or work assignment brief.
GUIDELINES FOR APPENDIX G
APPLICATION SOFTWARE TESTING LIST OF SOFTWARE TESTING CERTIFICATIONS
________________________________________________________________________________
QAI was established in 1980 in Orlando, Florida, U.S.A. It provides educational and training
programs for the development of IT professionals in different aspects, including software
quality assurance and testing. QAI issues the following certifications that qualify professional
software testers and test managers:
(i) Certified Associate in Software Testing (CAST)
(ii) Certified Software Tester (CSTE)
(iii) Certified Manager of Software Testing (CMST)
BCS was founded in 1957 with the aim of promoting the study and practice of information
technology for the benefit of the public. In March 2010, the ISTQB UK Testing Board and
BCS, The Chartered Institute for IT, became partners in providing software testing
examinations at Foundation and Advanced levels. It also provides other certificates, such as
the Intermediate certificate in software testing.
4 Reference website: http://www.qaiusa.com
5 Reference website: http://www.istqb.org/about-istqb.html
6 Reference website: http://certifications.bcs.org/category/15582
7 Reference website: http://sk.neea.edu.cn/
GUIDELINES FOR APPENDIX H
APPLICATION SOFTWARE TESTING INDEPENDENT TESTING SERVICES
________________________________________________________________________________
The project team should consider which parts of testing to outsource, including test levels, test
types and test activities.
independent testing contractor. Among these, test types requiring regular repetition and test
automation are more suitable for outsourcing in order to save costs. Examples are function
tests, regression tests and compatibility tests. Test types such as performance tests, load tests
and security tests may also be outsourced, on a case-by-case basis, to professional testing
contractors who have specialized tools, techniques and expertise to perform the tests.
Roles and Responsibilities

Test Manager / Test Specialist
(i) Schedule and assign duties to subordinates
(ii) Plan and manage the test process
(iii) Resolve technical and non-technical issues and disputes related to the test process
(iv) Establish procedures and/or automated performance measurement capability to
monitor the progress of testing
(v) Liaise with the project team and the developer's team on day-to-day testing work
(vi) Develop project management plans and quality control parameters

Test Coordinator
(i) Schedule and assign testing tasks to team members
(ii) Assist in setting up the testing environment
(iii) Co-ordinate with all working parties in projects
(iv) Define the approach, methodology and tools used in testing
(v) Perform quality control and quality assurance in the test process
(vi) Prepare test scenarios and produce documentation
(vii) Provide support and troubleshoot problems in the test process
(viii) Provide status of progress and defects to the Test Manager
(ix) Assure conformance to standards and test procedures

Test Lead
(i) Supervise and lead the work of testing in the same team
(ii) Assign testing tasks to testers and assist in test execution when required
(iii) Produce and maintain testing status documentation

Tester
(i) Conduct testing according to test procedures
(ii) Produce and maintain test documentation