Software Testing
Welcome to the Software Quality Assurance (QA) and Testing Information Center. It offers
articles, best practices, methodologies, FAQs, guides, books, directories, and tools related to
software QA and testing. Popular materials include job interview FAQs, test plan samples,
WinRunner FAQs, and more.
SQA India can be a useful resource for software quality assurance. We follow Software
Quality Assurance (SQA) methods that enforce quality assurance at every phase of the SDLC
(Software Development Life Cycle). A quality control checklist is developed for all phases of the SDLC.
We aim to provide quality, consistent results through automated processes that have been tested
over time. The purpose of the SQA Plan is to establish a uniform Web / software development
process that is applicable throughout the software development life cycle.
Our Software Testing Procedure
Software testing challenges the assumptions, risks, and uncertainty inherent in the work of other
disciplines, and addresses those concerns using concrete demonstration and impartial evaluation.
Testing Methods
1. White Box
Also called ‘Structural Testing’ or ‘Glass Box Testing’, white box testing exercises the code with the system specs
in mind. The inner workings of the code are considered, so these are typically developer tests.
○ Mutation Testing
A number of mutants of the program are created with minor changes; a test case is effective
when no mutant's result coincides with the result of the original program for that test case.
○ Basis Path Testing
Testing is done based on flow graph notation, using cyclomatic complexity and graph matrices.
○ Control Structure Testing
The flow of control along execution paths is considered for testing. It also checks:
Condition testing: branch testing, domain testing.
Data flow testing.
Loop testing: simple, nested, concatenated, and unstructured loops.
2. Black Box
Also called ‘Functional Testing’, as it concentrates on testing the functionality rather than the internal
details of the code.
Test cases are designed based on the task descriptions.
○ Comparison Testing
Test case results are compared with the results of a test oracle.
○ Graph-Based Testing
Cause-and-effect graphs are generated, and cyclomatic complexity is considered in deriving the test
cases.
○ Boundary Value Testing
Boundary values of the equivalence classes are considered and tested, since inputs at the
boundaries of equivalence classes are where failures most often occur.
○ Equivalence Class Testing
Test inputs are classified into equivalence classes such that one input validates all the
input values in that class.
Gray Box Testing: similar to black box testing, but the test cases, risk assessments, and test methods involved in
gray box testing are developed based on knowledge of the internal data and flow structures.
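For illustration, here is a minimal sketch of equivalence class and boundary value test design in Python; the is_valid_age function and its 18-65 rule are invented for the example:

    # Hypothetical unit under test: accepts ages in the range 18-65 inclusive.
    def is_valid_age(age):
        return 18 <= age <= 65

    # Equivalence classes: one representative input validates the whole class.
    # Classes here: below range (invalid), in range (valid), above range (invalid).
    equivalence_cases = [(10, False), (40, True), (90, False)]

    # Boundary values: the edges of the equivalence classes, where defects cluster.
    boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

    for age, expected in equivalence_cases + boundary_cases:
        assert is_valid_age(age) == expected, f"age={age}: expected {expected}"
    print("All equivalence class and boundary value cases passed.")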
Levels of Testing
1. Unit Testing.
○ Unit Testing is primarily carried out by the developers themselves.
○ Deals with the functional correctness and the completeness of individual program units.
○ White box testing methods are employed
2. Integration Testing.
○ Integration Testing: Deals with testing when several program units are integrated.
○ Regression Testing : a change of behavior due to modification or addition is called
‘regression’. Regression testing re-runs earlier tests to ensure such changes have not broken existing functionality.
○ Incremental Integration Testing : checks for bugs that are encountered when a module is
integrated with the existing ones.
○ Smoke Testing : a battery of tests that checks the basic functionality of the program. If it
fails, the program is not sent for further testing.
3. System Testing.
○ System Testing - Deals with testing the whole program system for its intended purpose.
○ Recovery Testing : the system is forced to fail, and how well the system recovers from
the failure is checked.
○ Security Testing : checks the capability of the system to defend itself against hostile attacks on
programs and data.
○ Load & Stress Testing : the system is tested under maximum load, and its extreme stress points are
identified.
○ Performance Testing : Used to determine the processing speed.
○ Installation Testing : installation and uninstallation are checked on the target platform.
4. Acceptance Testing.
○ UAT (User Acceptance Testing) ensures that the project satisfies the customer's requirements.
○ Alpha Testing : It is the test done by the client at the developer’s site.
○ Beta Testing : This is the test done by the end-users at the client’s site.
○ Long-Term Testing : checks for faults that occur during long-term usage of the product.
○ Compatibility Testing : determines how well the product operates across different environments and handles transitions between them.
What is Software Testing?
Software testing is more than just error detection.
Testing software is operating the software under controlled conditions, to (1) *verify* that it behaves
"as specified"; (2) to *detect errors*; and (3) to *validate* that what has been specified is what the
user actually wanted.
1. *Verification* is the checking or testing of items, including software, for conformance and
consistency by evaluating the results against pre-specified requirements. [*Verification: Are
we building the system right?*]
2. *Error Detection*: Testing should intentionally attempt to make things go wrong to determine if
things happen when they shouldn't or things don't happen when they should.
3. *Validation* looks at system correctness – i.e., it is the process of checking that what has
been specified is what the user actually wanted. [*Validation: Are we building the right
system?*]
In other words, validation checks to see if we are building what the customer wants/needs, and
verification checks to see if we are building that system correctly. Both verification and validation
are necessary, but different components of any testing activity.
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of
analysing a software item to detect the differences between existing and required conditions (that
is defects/errors/bugs) and to evaluate the features of the software item. Remember: The purpose
of testing is verification, validation and error detection in order to find problems – and the purpose
of finding those problems is to get them fixed.
Software Testing
Testing involves operation of a system or application under controlled conditions and evaluating the results.
Every Test consists of 3 steps :
Planning : the inputs to be given, the results to be expected, and the process to be followed are planned.
Execution : preparing the test environment, completing the test, and determining test results.
Evaluation : comparing the actual test outcome with what the correct outcome should have been.
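As a minimal sketch, the three steps map directly onto the shape of a single automated test; the add function here is just a stand-in for the item under test:

    import unittest

    def add(a, b):              # stand-in for the software item under test
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add(self):
            # Planning: decide the inputs to give and the result that should be obtained.
            inputs, expected = (2, 3), 5
            # Execution: run the item under test in the prepared environment.
            actual = add(*inputs)
            # Evaluation: compare the actual outcome with the correct outcome.
            self.assertEqual(actual, expected)

    if __name__ == "__main__":
        unittest.main()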
Automated Testing
Automated testing is, at its simplest, removing the "human factor" and letting the computer do the
thinking. This can range from integrated debug tests to much more intricate processes. The idea
of these tests is to find bugs that are often very challenging or time-intensive for human testers
to find. This sort of testing can save many man-hours and can be more "efficient" in some cases.
But it costs more to ask a developer to write more lines of code into the game (or an external
tool) than it does to pay a tester, and there is always the chance that there is a bug in the bug-testing
program. Reusability is another problem; you may not be able to transfer a testing program from
one title (or platform) to another. And of course, there is always the "human factor" of testing that
can never truly be replaced.
Other successful alternatives or variations: nothing is infallible. Realistically, a moderate split of
human and automated testing can rule out a wider range of possible bugs than relying
solely on one or the other. Giving the testers limited access to automated tools can often help
speed up the test cycle.
Release Acceptance Test
The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on
each development release to check that each build is stable enough for further testing. Typically,
this test suite consists of entrance and exit test cases plus test cases that check mainstream
functions of the program with mainstream data. Copies of the RAT can be distributed to developers
so that they can run the tests before submitting builds to the testing group. If a build does not pass
a RAT test, it is reasonable to do the following:
• Suspend testing on the new build and resume testing on the prior build until another build is
received.
• Report the failing criteria to the development team.
• Request a new build.
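A RAT suite can be packaged as a single script that developers run before submitting a build. A minimal sketch, in which the three check functions are hypothetical stand-ins for entrance, mainstream-function, and exit test cases:

    import sys

    def check_application_starts():        # hypothetical entrance test case
        return True

    def check_mainstream_save_and_load():  # hypothetical mainstream-function test case
        return True

    def check_application_exits():         # hypothetical exit test case
        return True

    RAT_SUITE = [check_application_starts, check_mainstream_save_and_load,
                 check_application_exits]

    def run_rat():
        failures = [test.__name__ for test in RAT_SUITE if not test()]
        if failures:
            # Report the failing criteria to the development team and request a new build.
            print("RAT failed:", ", ".join(failures))
            sys.exit(1)   # nonzero exit marks the build as too unstable for further testing
        print("RAT passed: build is stable enough for further testing.")

    if __name__ == "__main__":
        run_rat()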
Functional Acceptance Simple Test
The functional acceptance simple test (FAST) is run on each development release to check that key
features of the program are appropriately accessible and functioning properly on at least one
test configuration (preferably the minimum or common configuration). This test suite consists of
simple test cases that check the lowest level of functionality for each command, to ensure that
task-oriented functional tests (TOFTs) can be performed on the program. The objective is to
decompose the functionality of a program down to the command level and then apply test cases to
check that each command works as intended. No attention is paid to the combination of these
basic commands, the context of the feature that is formed by these combined commands, or the
end result of the overall feature. For example, FAST for a File/Save As menu command checks
that the Save As dialog box displays. However, it does not validate that the overall file-saving
feature works, nor does it validate the integrity of saved files.
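A minimal sketch of the FAST idea, assuming a hypothetical command-line program called myapp; each case checks only that the command responds at the lowest level, not combined features or end results:

    import subprocess

    # The program decomposed to the command level (commands are invented examples).
    COMMANDS = [
        ["myapp", "open", "sample.txt"],
        ["myapp", "save-as", "copy.txt"],   # checks the command responds, not file integrity
        ["myapp", "print", "--preview"],
    ]

    for command in COMMANDS:
        result = subprocess.run(command, capture_output=True)
        assert result.returncode == 0, "command not functioning: " + " ".join(command)
    print("FAST passed on this configuration.")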
Deployment Acceptance Test
The configuration on which the Web system will be deployed is often much different from the
development and test configurations. Testing efforts must take this into account when preparing and
writing test cases for installation-time acceptance tests. This type of test usually includes the full
installation of the application to the targeted environments or configurations.
Structural System Testing Techniques
Stress Testing:
• Determines whether the system can function when subjected to large volumes of data.
• It includes testing of: input transactions, internal tables, disk space usage, output,
communication, computer capacity, and interaction with people.
Objectives
• To simulate the production environment.
• Normal or above-normal volumes of transactions can be processed within the expected time frame.
• The application system is able to process larger volumes of data.
• System capacity has sufficient resources to meet expected turnaround times.
• The test should simulate the production environment as closely as possible.
• Online systems should be stress tested with users entering test data at a normal or above-normal pace.
• Batch systems should be tested with huge volumes/numbers of batches.
• The test conditions should include error conditions.
How to Use
• Transactions used in stress testing are obtained from three sources: test data generators,
test transactions created by the test group, and transactions previously used in production.
• In stress testing, the system should run as it would in the production environment.
When to Use
• When there is uncertainty whether the system will work with huge volumes of data without
generating faults.
• When an attempt is made to break the system with a huge amount of data.
• The most commonly used technique for testing online transaction systems, as other
techniques are not effective.
Examples
• Sufficient disk space allocated.
• Communication lines are adequate.
Disadvantages
• The amount of time taken to prepare for testing.
• The amount of resources utilized during test execution.
Execution Testing:
• Determines whether the system achieves the desired level of proficiency in production
status.
• Used to verify: response time, usage, turnaround time, and design performance.
• Test execution can be done using the simulated system or the actual system.
• The system can be tested as a whole or in parts.
Objectives
• To determine whether the system can meet specific performance criteria.
• To verify whether the system makes optimum use of hardware and software.
• Determining response time to online user requests.
• Determining transaction processing turnaround time.
How to Use
• Can be performed in any phase of the SDLC.
• Used to evaluate a single aspect of the system.
• Executed in the following ways: using hardware and software monitors; simulating
functioning with a simulation model; or creating quick-and-dirty programs to evaluate the
approximate performance of the completed system.
When to Use
• Should be used early in the SDLC.
• Should be performed when it is known that the results can be used to make changes to the
system structure.
Examples
• Transaction turnaround time adequacy.
• Optimum use of hardware and software.
Security measures protect Web systems from both internal and external threats. E-commerce
concerns and the growing popularity of Web-based applications have made security testing
increasingly relevant. Security tests determine whether a company's security policies have been
properly implemented; they evaluate the functionality of existing systems, not whether the security
policies that have been implemented are appropriate.
PRIMARY COMPONENTS REQUIRING SECURITY TESTING
• Application software
• Database
• Servers
• Client workstations
• Networks
System-Level Test
System-level tests consist of batteries of tests that are designed to fully exercise a program as a
whole and check that all elements of the integrated system function properly.
Functional System Testing
System tests check that the software functions properly from end to end. The components of the
system include: a database, Web-enabled application software modules, Web servers, Web-enabled
application frameworks, Web browser software, TCP/IP networking routers, media
servers to stream audio and video, and messaging services for email.
A common mistake of test professionals is to believe that they are conducting system tests while
they are actually testing a single component of the system. For example, checking that the Web
server returns a page is not a system test if the page contains only a static HTML page.
System testing is the process of testing an integrated hardware and software system to verify that
the system meets its specified requirements. It verifies proper execution of the entire set of
application components including interfaces to other applications. Project teams of developers and
test analysts are responsible for ensuring that this level of testing is performed.
The system testing checklist includes questions about:
• Functional completeness of the system or the add-on module
• Runtime behavior on various operating systems or different hardware configurations
• Installability and configurability on various systems
• Capacity limitations (maximum file size, number of records, maximum number of concurrent users, etc.)
• Behavior in response to problems in the programming environment (system crash, unavailable
network, full hard disk, printer not ready)
• Protection against unauthorized access to data and programs
If the system verifies user input according to business rules, then that needs to work properly. For example, a
State field may be checked against a list of valid values. If this is the case, you need to verify that the list is
complete and that the program actually calls the list properly (add a bogus value to the list and make sure the
system accepts it).
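A minimal sketch of that check, assuming the application exposes a validate_state function driven by a VALID_STATES list (both names are invented):

    # Hypothetical application code: input is checked against a list of valid values.
    VALID_STATES = ["AL", "AK", "AZ"]        # ...the full list in the real application

    def validate_state(value):
        return value in VALID_STATES

    # Verify that values missing from the list are rejected.
    assert not validate_state("ZZ")

    # Verify the program actually calls the list: add a bogus value and
    # confirm the system now accepts it, proving the list drives validation.
    VALID_STATES.append("ZZ")
    assert validate_state("ZZ")
    VALID_STATES.remove("ZZ")                # clean up the test fixture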
14. Cookies
Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to
check them. If they store login information, make sure the cookies work. If the cookie is used for statistics,
verify that totals are being counted properly. And you'll probably want to make sure those cookies are
encrypted too, otherwise people can edit their cookies and skew your statistics.
15. Application specific functional requirements
Most importantly, you want to verify the application-specific functional requirements. Try to
perform all the functions a user would: place an order, change an order, cancel an order, check the status of the
order, change shipping information before an order is shipped, pay online, ad nauseam.
This is why your users will show up on your doorstep, so you need to make sure you can do what you advertise.
Black Box testing for web-based application: (3)
16. Interface Testing
Many times, a web site is not an island. The site will call external servers for additional data, verification of data
or fulfillment of orders.
17. Server interface
The first interface you should test is the interface between the browser and the server. You should attempt
transactions, then view the server logs and verify that what you're seeing in the browser is actually happening
on the server. It's also a good idea to run queries on the database to make sure the transaction data is being
stored properly.
18. External interfaces
Some web systems have external interfaces. For example, a merchant might verify credit card transactions
real-time in order to reduce fraud. You will need to send several test transactions using the web interface. Try
credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using a
Discover card. (A script can check the first digit of the credit card number: 3 for American Express, 4 for Visa, 5
for MasterCard, or 6 for Discover, before the transaction is sent.) Basically, you want to make sure that the
software can handle every possible message returned by the external server.
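The first-digit screen described above is straightforward to script; a sketch:

    CARD_TYPES = {"3": "American Express", "4": "Visa", "5": "MasterCard", "6": "Discover"}

    def card_type(card_number):
        """Screen the first digit before the transaction is sent to the external server."""
        return CARD_TYPES.get(card_number[:1])

    ACCEPTED = {"Visa", "MasterCard"}            # this merchant only takes Visa and MasterCard

    def should_send(card_number):
        return card_type(card_number) in ACCEPTED

    assert should_send("4111111111111111")       # Visa: send to the verification server
    assert not should_send("6011000000000004")   # Discover: reject before sending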
19. Error handling
One of the areas left untested most often is interface error handling. Usually we try to make sure our system
can handle all of our errors, but we never plan for the other systems' errors or for the unexpected. Try leaving
the site mid-transaction - what happens? Does the order complete anyway? Try losing the internet connection
from the user to the server. Try losing the connection from the server to the credit card verification server. Is
there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is
not user-initiated, does the order get stored so customer service reps can call back if the user doesn't come
back to the site?
20. Compatibility
You will also want to verify that the application can work on the machines your customers will be using. If the
product is going to the web for the world to use, you will need to try different combinations of operating
system, browser, video setting and modem speed.
21. Operating systems
Does the site work on both Macs and IBM-compatibles? Some fonts are not available on both systems, so make
sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins only available for one OS, if
your users will use both.
22. Browsers
Does your site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work for
certain browsers. Make sure there are alternate tags for images, in case someone is using a text browser. If
you're using SSL security, you only need to check browsers 3.0 and higher, but verify that there is a message
for those using older browsers.
23. Video settings
Does the layout still look good at 640x480 or 800x600? Are fonts too small to read? Are they too big? Does all
the text and graphic alignment still work?
24. Modem/connection speeds
Does it take 10 minutes to load a page over a 28.8 modem, when you tested hooked up to a T1? Users will expect
long download times when they are grabbing documents or demos, but not on the front page. Make sure that
the images aren't too large. Make sure that marketing didn't put 50k of font size -6 keywords for search
engines.
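This is easy to estimate before testing: page weight in bytes times eight, divided by the connection speed in bits per second. A sketch with an invented page size:

    def download_seconds(page_bytes, speed_kbps):
        """Rough lower bound: connection speeds are in kilobits/s, page sizes in bytes."""
        return (page_bytes * 8) / (speed_kbps * 1000)

    FRONT_PAGE_BYTES = 120_000   # invented: HTML plus images plus 50k of keywords
    print(f"28.8 modem: {download_seconds(FRONT_PAGE_BYTES, 28.8):.1f}s")   # about 33s
    print(f"T1 line:    {download_seconds(FRONT_PAGE_BYTES, 1544):.1f}s")   # under 1s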
Black Box testing for web-based application: (4)
25. Printers
Users like to print. The concept behind the web should save paper and reduce printing, but most people would
rather read on paper than on the screen. So, you need to verify that the pages print properly. Sometimes
images and text align on the screen differently than on the printed page. You need to at least verify that order
confirmation screens can be printed properly.
26. Combinations
Now you get to try combinations. Maybe 800x600 looks good on the Mac but not on the IBM. Maybe the IBM with
Netscape works, but not with Lynx.
If the web site will be used internally it might make testing a little easier. If the company has an official web
browser choice, then you just need to verify that it works for that browser. If everyone has a T1 connection,
then you might not need to check load times. (But keep in mind, some people may dial in from home.) With
internal applications, the development team can make disclaimers about system requirements and only support
those systems setups. But, ideally, the site should work on all machines so you don't limit growth and changes
in the future.
27. Load/Stress
You will need to verify that the system can handle a large number of users at the same time, a large amount of
data from each user, and a long period of continuous use. Accessibility is extremely important to users. If they
get a "busy signal", they hang up and call the competition. Not only must the system be checked so your
customers can gain access, but many times crackers will attempt to gain access to a system by overloading it.
For the sake of security, your system needs to know what to do when it's overloaded and not simply blow up.
Many users at the same time
If the site just put up the results of a national lottery, it better be able to handle millions of users right after the
winning numbers are posted. A load test tool would be able to simulate a large number of users accessing the
site at the same time.
Large amount of data from each user
Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore
decides to order 5000 different books? Or what if grandma wants to send a gift to each of her 50 grandchildren
for Christmas (separate mailing addresses for each, of course.) Can your system handle large amounts of data
from a single user?
Long period of continuous use
If the site is intended to take orders for flower deliveries, then it better be able to handle the week before
Mother's Day. If the site offers web-based email, it better be able to run for months or even years, without
downtimes.
You will probably want to use an automated test tool to implement these types of tests, since they are difficult
to do manually. Imagine coordinating 100 people to hit the site at the same time. Now try 100,000 people.
Generally, the tool will pay for itself the second or third time you use it. Once the tool is set up, running another
test is just a click away.
28. Security
Even if you aren't accepting credit card payments, security is very important. The web site will be the only
exposure some customers have to your company. And, if that exposure is a hacked page, they won't feel safe
doing business with you.
Black Box testing for web-based application: (5)
29. Directory setup
The most elementary step of web security is proper setup of directories. Each directory should have an
index.html or main.html page so a directory listing doesn't appear.
One company I was consulting for didn't observe this principle. I right-clicked on an image and found the path
"...com/objects/images". I went to that directory manually and found a complete listing of the images on that
site. That wasn't too important. Next, I went to the directory below that: "...com/objects" and I hit the jackpot.
There were plenty of goodies, but what caught my eye were the historical pages. They had changed their prices
every month and kept the old pages. I browsed around and could figure out their profit margin and how low
they were willing to go on a contract. If a potential customer did a little browsing first, they would have had a
definite advantage at the bargaining table.
SSL
Many sites use SSL for secure transactions. You know you have entered an SSL site because there will be a
browser warning and the HTTP in the location field on the browser will change to HTTPS. If your development
group uses SSL, you need to make sure there is an alternate page for browsers with versions less than 3.0, since
SSL is not compatible with those browsers. You also need to make sure that there are warnings when you enter
and leave the secured site. Is there a timeout limit? What happens if the user tries a transaction after the
timeout?
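Part of this is scriptable. A sketch that requests a secured page over plain HTTP and confirms the final URL after redirects is HTTPS; the URL is a placeholder:

    import urllib.request

    SECURE_PAGE = "http://www.example.com/checkout"   # placeholder secured page

    # Follow any redirects and confirm we end up on HTTPS.
    with urllib.request.urlopen(SECURE_PAGE) as response:
        final_url = response.geturl()
    assert final_url.startswith("https://"), "page not secured: " + final_url
    print("plain HTTP request was redirected to", final_url)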
30. Logins
In order to validate users, several sites require customers to login. This makes it easier for the customer since
they don't have to re-enter personal information every time. You need to verify that the system does not allow
invalid usernames/passwords and that it does allow valid logins. Is there a maximum number of failed logins
allowed before the server locks out the current user? Is the lockout based on IP? What if the maximum failed
login attempts is three, and you try three, but then enter a valid login? What are the rules for password
selection?
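Those lockout questions translate directly into executable checks. A sketch, assuming a login(username, password) interface that returns "ok", "invalid", or "locked", with a three-failure lockout rule; the fake login stands in for the real system:

    def run_lockout_checks(login):
        assert login("alice", "wrong") == "invalid"
        assert login("alice", "wrong") == "invalid"
        assert login("alice", "wrong") == "invalid"
        # After the maximum failed attempts, even a valid login must be refused.
        assert login("alice", "right") == "locked", "lockout not enforced"

    def make_fake_login():
        """Stand-in for the system under test: three failures lock the account."""
        failures = {"alice": 0}
        def login(user, password):
            if failures[user] >= 3:
                return "locked"
            if password != "right":
                failures[user] += 1
                return "invalid"
            return "ok"
        return login

    run_lockout_checks(make_fake_login())
    print("lockout rule enforced")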
31. Log files
Behind the scenes, you will need to verify that server logs are working properly. Does the log track every
transaction? Does it track unsuccessful login attempts? Does it track stolen credit card usage? What does it
store for each transaction? IP address? User name?
32. Scripting languages
Scripting languages are a constant source of security holes. The details are different for each language. Some
exploits allow access to the root directory. Others allow access to the mail server. Find out what scripting
languages are being used and research the loopholes. It might also be a good idea to subscribe to a security
newsgroup that discusses the language you will be testing.
33. Web Server Testing Features
• Feature: Definition
• Transactions: The number of times the test script requested the current URL
• Bytes transferred: The total number of bytes sent or received, less HTTP headers
• Response time: The average time it took for the server to respond to each individual request.
• Transaction rate: The average number of transactions the server was able to handle per second.
• Concurrency: The average number of simultaneous connections the server was able to handle during
the test session.
• Status code nnn: This indicates how many times a particular HTTP status code was seen.
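Given raw timings from a test session, these features reduce to simple arithmetic. A sketch with invented sample data:

    # Each record: (bytes transferred less HTTP headers, response seconds).
    records = [(2048, 0.12), (4096, 0.30), (1024, 0.08), (2048, 0.15)]
    session_seconds = 0.40                     # wall-clock length of the test session

    transactions = len(records)                # times the script requested the URL
    bytes_transferred = sum(size for size, _ in records)
    response_time = sum(elapsed for _, elapsed in records) / transactions
    transaction_rate = transactions / session_seconds
    # Average simultaneous connections: total connection-busy time over session time.
    concurrency = sum(elapsed for _, elapsed in records) / session_seconds

    print(transactions, bytes_transferred, f"{response_time:.2f}s",
          f"{transaction_rate:.1f}/s", f"{concurrency:.2f}")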
Window Functions - Test Matrix
• Data Window Transfer Functions (record level): conditions for transfers with general transfer
functions, using data conditions. Test data row retrieval, which differs for list windows vs.
one-record (row) display windows.
• Verify Window Display Data: verify that inquiry data displays; tests stored procedure/GUI
retrieval of data.
• Row Data Maintenance: test data row handling from GUI to database; test stored procedure/GUI
add/change/delete database functions. Note: do an inquiry after each update to verify the
database update.
Data conditions to cover:
• List window with no data
• List window with one record in the list
• List window with more than one row - last row
• List window with more than one row - not the first or last row
• One-row display window
• Select inquiry entity in a list window (not from the list)
• Lists of columns
• Single-row display
• Drop-down list box - contents
• Drop-down list box - selection retrieval
• Specific data retrieval conditions - max, null, etc.
• Field edit formats
Maintenance operations to cover:
• New
• Change to a non-key field
• Change to a key field (delete and add)
• Delete
Application HELP:
• Micro help
• Balloon notes
• Help - Index
• Help - Table of Contents
• Help - Jump words
• Help - Text
Miscellaneous / Application Specific:
• Job status
• Online report/s
• Informational windows - content
• Informational windows - buttons
• Fatal application errors
Section 1 - Windows Compliance Testing
1.1. Application
Start Application by Double Clicking on its ICON. The Loading message should show the application name,
version number, and a bigger pictorial representation of the icon (a 'splash' screen).
No Login is necessary
The main window of the application should have the same caption as the caption of the icon in Program
Manager.
Closing the application should result in an "Are you sure?" message box.
Attempt to start the application twice. This should not be allowed - you should be returned to the main window.
Also try to start the application twice as it is loading.
On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass
(e.g. alpha access enquiries), then some "enquiry in progress" message should be displayed.
All screens should have a Help button; F1 should do the same.
If the screen has a Control menu, then use all ungreyed options. (see below)
Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar.
If it doesn't, then the text in the box should be grey or non-updateable. Refer to the previous page.
Enter text into the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
Enter invalid characters - letters in amount fields; try strange characters like + , - , * etc. in all fields.
SHIFT and arrow keys should select characters. Selection should also be possible with the mouse. A double click
should select all text in the box.
1.4. Option (Radio Buttons)
Left and right arrows should move the 'ON' selection, as should up and down. Select with the mouse by clicking.
1.5. Check Boxes
Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same.
1.6. Command Buttons
If a command button leads to another screen, and if the user can enter or change details on the other screen,
then the text on the button should be followed by three dots.
All buttons except for OK and Cancel should have a letter access to them, indicated by an underlined letter
in the button text. The button should be activated by pressing ALT+letter. Make sure there is no duplication.
Click each button once with the mouse - this should activate it.
Tab to each button - press SPACE - this should activate it.
Tab to each button - press RETURN - this should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.
Tab to another type of control (not a command button). One button on the screen should be the default
(indicated by a thick black border). Pressing Return in any non-command-button control should activate it.
If there is a Cancel button on the screen, then pressing <Esc> should activate it.
If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a
message phrased positively, with Yes/No answers, where Yes results in the completion of the action.
1.7. Drop Down List Boxes
Pressing the arrow should give the list of options. The list may be scrollable. You should not be able to type text
in the box.
Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing ‘Ctrl - F4’
should open/drop down the list box.
Spacing should be compatible with the existing Windows spacing (Word, etc.). Items should be in alphabetical
order, with the exception of blank/none, which is at the top or the bottom of the list box.
Dropping down the list with an item selected should display the list with the selected item at the top.
Make sure only one space appears; there shouldn't be a blank line at the bottom.
1.8. Combo Boxes
Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.
1.9. List Boxes
Should allow a single selection to be chosen, by clicking with the mouse or by using the up and down arrow keys.
Pressing a letter should take you to the first item in the list starting with that letter.
If there is a 'View' or 'Open' button beside the list box, then double-clicking on a line in the list box should act
in the same way as selecting an item in the list box and then clicking the command button.
Force the scroll bar to appear and make sure all the data can be seen in the box.
GUI Testing Checklist (2)
19. Where the database requires a value (other than null), that value should be defaulted into the field. The
user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.
General Conditions:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have a corresponding key command.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where
appropriate.
5. In drop down list boxes, ensure that the names are not abbreviated or cut short.
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key /
hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it
generates a caution message "Changes will be lost - continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be
undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are
present - i.e. make sure they don't act on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it
should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system
users.
16. Assure that command buttons are all of similar size and shape, and same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object)
which is invoked when the Enter key is pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button which makes sense according to the function of the
window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system
users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same
window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly
demarcated areas "Group Box"
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color
blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the
application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be no need to scroll
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing
the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not
open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate),
generating "changes will be lost" message if necessary.
43. Micro help text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of a tabbed window fails, the window should not open.
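Several of these conditions can be checked mechanically. For example, a sketch that scans button captions for duplicate hot keys, assuming the Windows convention that '&' marks the underlined access letter:

    from collections import Counter

    def duplicate_hotkeys(captions):
        """Return hot-key letters used by more than one control in a window."""
        letters = [c[c.index("&") + 1].lower() for c in captions if "&" in c]
        return [letter for letter, count in Counter(letters).items() if count > 1]

    window_buttons = ["&Save", "&Search", "C&lose", "Help"]   # sample captions
    print(duplicate_hotkeys(window_buttons))                   # ['s']: ALT+S is duplicated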
• Note: The following keys are used in some Windows applications, and are included as a guide.
• 3.3. Control Shortcut Keys
Key Function
CTRL + Z Undo
CTRL + X Cut
CTRL + C Copy
CTRL + V Paste
CTRL + N New
CTRL + O Open
CTRL + P Print
CTRL + S Save
CTRL + B Bold*
CTRL + I Italic*
CTRL + U Underline*
• * These shortcuts are suggested for text formatting applications, in the context for
which they make sense. Applications may use other modifiers for these operations.
Checklist: Numeric Entry
The following edits, questions, and checks should be considered for all numeric fields.
Other Tests
• Overflow
• Underflow
• Rounding
• Floating Point Errors
Formats
• Currency (Symbol, Separators, Commas & Periods)
• Input
• Storage
• Output
• Display
• Print
• Integer (16, 32, 64)
• Floating Point
• Binary
• Packed
• Hex, Octal, Scientific Notation
• Placement of Negative Indicator
• -, CB, ( ) Leading or Trailing
• Word Boundaries
Attributes
• Position (Display or Print)
• Color (Red for Negative)
• Intensity
• Blinking
• Font Size
• Italics
Zero
• Leading 0123
• Trailing 123.0
• Absent 123.
Alternative Formats
• Display values in thousands or millions
• Display as words "One Thousand"
• Roman numerals
Error Message
• When displayed
• Where displayed
• Should they be acknowledged?
• Automatic Recovery?
Initialization
• Starting Value
• Null Value
• Reset Value
Reasonableness Checks
Entry Format
• Character
• Numeric
Display Issues
• Blank on either side of field to prevent
touching another field
• 123 123 vs. 123123
• Display with leading zero
Source of Value
• Will it change?
• Multiple?
• Alternative Source
Encrypted storage
Validation
• Table
• Computation
• Other report
Naming Conventions
Compiler Requirements
Note the edits that are performed by the programming language, tests that should be handled during unit
testing, and checks that should be done via integration or system testing.
Other issues:
1. Will boundaries and limits change over time?
2. Are they influenced by something else?
3. Will field accept operators? +, -, /, *, !, **, ^, %
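A few of these edits are easy to demonstrate with the standard library; a sketch of floating-point, rounding, and overflow checks:

    from decimal import Decimal, ROUND_HALF_UP

    # Floating-point errors: binary floats cannot represent 0.1 exactly.
    assert 0.1 + 0.2 != 0.3
    assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")   # exact decimal storage

    # Rounding: confirm the documented rule (here, round half up on currency).
    assert Decimal("2.345").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == Decimal("2.35")

    # Overflow: a 16-bit signed integer field must reject values beyond its bounds.
    INT16_MIN, INT16_MAX = -2**15, 2**15 - 1
    for value in (INT16_MAX, INT16_MAX + 1):
        fits = INT16_MIN <= value <= INT16_MAX
        print(value, "fits in 16 bits" if fits else "overflows 16 bits")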
Other issues:
1. Can invalid dates be passed to this routine? Should they be accepted?
2. Is there a standard date entry routine in the library?
3. Can new date formats be easily added and edited?
4. What is the source of the date: input documents, calendar on the wall, or field on another document?
5. Are there other mechanisms to change dates outside of this program?
6. Is this a date and time field?
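The invalid-date question is easy to probe with the standard library's parser; the formats shown are examples:

    from datetime import datetime

    def parse_date(text, fmt="%Y/%m/%d"):
        """Return a date if the text is valid, else None; strptime rejects impossible dates."""
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            return None

    assert parse_date("2024/02/29") is not None   # leap day: valid
    assert parse_date("2023/02/30") is None       # impossible date: rejected
    assert parse_date("2023/13/01") is None       # no thirteenth month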
Checklist: Developing Windows Applications
• Modal Windows - Oftentimes, modal windows which must be acted upon end up hidden behind standard
windows. This gives the user the impression that the system has locked up.
• Special Characters - Special characters may not be usable on some Windows entry screens; there may also
be conflicts when converting data or using data from other systems.
• Printer Configuration - Although Windows is designed to handle the printer setup for most applications,
there are formatting differences between printers and printer types. LaserJet printers do not behave the same
as inkjets, nor do 300, 600, or 1200 DPI laser printers behave the same across platforms.
• Date Formats - Varying date formats sometimes cause trouble when they are displayed in Windows entry
screens. This can occur when programs designed to handle a YY/MM/DD format receive dates in a
YYYY/MMM/DD format.
• Screen Savers - Some screen savers, such as After Dark, are memory or resource ‘hogs’ and have been
known to cause trouble when running other applications.
• Speed Keys - Verify that there are no conflicting speed keys on the various screens. This is especially
important on screens where the buttons change.
• Virus Protection Software - Some virus protection software can be configured too strictly, which may cause
applications to run slowly or incorrectly.
• Disk Compression Tools - Some disk compression software may cause applications to run slowly or
incorrectly.
• Multiple Open Windows - How does the system handle having multiple open windows? Are there any
resource errors?
• Test Multiple Environments - Programs need to be tested under multiple configurations, since different
configurations can produce different results.
• Test Multiple Operating Systems - Programs running under Win 95, Win NT, and Windows 3.11 do not
behave the same in all environments.
• Corrupted DLLs - Corrupted DLLs will sometimes cause applications not to execute or, more damaging, to
run sporadically.
• Incorrect DLL Versions - Incorrect DLL versions will sometimes cause applications not to execute or, more
damaging, to run sporadically.
• Missing DLLs - Missing DLLs will usually cause applications not to execute.
• Standard Program Look & Feel - The basic Windows look and feel should be consistent across all windows
and the entire application. Windows, buttons, and controls should follow the same standards for sizes.
• Tab Order - When pressing the TAB key to change focus from object to object, the progression should be
logical.
• Completion of Edits - The program should force the completion of edits for any screen before users have a
chance to exit the program.
• Saving Screen Sizes - Does the user have an opportunity to save the current screen sizes and positions?
• Operational Speed - Make sure that the system operates at a functional speed: databases, retrieval, and
external references.
• Testing Under Loaded Environments - Test system functions while running resource-hogging software
(MS Word, MS Excel, WP, etc.).
• Resource Monitors - Resource monitors help track Windows resources, which, when expended, will cause
GPFs.
• Video Settings - Programmers tend to develop at 800x600 or higher resolution; run at a default 640x480,
such programs tend to overfill the screen. Make sure the application is designed for the resolution used by
customers.
• Clicking on Objects Multiple Times - Will you get multiple instances of the same object or window with
multiple clicks?
• Saving Column Orders - Can the user save the order of the columns in display windows?
• Displaying Processing Messages - When doing system processing, is some information displayed stating
what the system is doing?
• Clicking on Other Objects While the System is Processing - Is processing interrupted? Do unexpected
events occur after processing finishes?
• Large Fonts / Small Fonts - When switching between Windows font sizes, mixed results occur when
designing in one mode and executing in another.
• Maximizing / Minimizing All Windows - Do the actual screen elements resize? Is all of the available screen
space used when the screen is maximized?
• Setup Program - Does the setup program function correctly across multiple OSs? Does it prompt the user
before overwriting existing files?
• Consistency in Operation - The program should behave consistently across all screens and the overall
application.
• Multiple Copies of the Same Window - Can the program handle multiple copies of the same window? Can
all of these windows be edited concurrently?
• Confirmation of Deletes - All deletes should require confirmation before execution.
• Selecting Alternative Language Options - Will the program handle the use of other languages (French,
Spanish, Italian, etc.)?
(1) They're testing software. Without knowing programming, they can't have any real insight into the kinds of
bugs that come into software and the likeliest places to find them. There's never enough time to test
"completely", so all software testing is a compromise between available resources and thoroughness. The tester
must optimize scarce resources, and that means focusing on where the bugs are likely to be. If you don't know
programming, you're unlikely to have useful intuition about where to look.
(2) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The
tools, both as testing products and as mental disciplines, all presume programming knowledge. Without
programmer training, most test techniques (and the tools based on those techniques) are unavailable. The
tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most
simplistic tools.
Taking entry-level programmers and putting them into a test organization is not a good idea because:
A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill,
research has shown, correlates well with programmer and tester aptitude, and it is very similar to the kind of
unresolved chaos with which the tester must deal daily. Here's the theory behind the notion. If you do a
crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a
compatible cross-word. So you keep a dozen tentative entries unmarked, and when, by some process or another,
you realize that there is a compatible cross-word, you enter them both. You keep score by how many
corrections you have to make - not by merely finishing the puzzle, because that's a given. I've done many
informal polls of this aptitude at my seminars and found a much higher percentage of crossword-puzzles-in-ink
aficionados than you'd get in a normal population.
6. People Skills.
Here's another area in which testers and programmers can differ. You can be an effective programmer even if
you are hostile and anti-social; that won't work for a tester. Testers take a lot of abuse from outraged
programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be diplomatic
when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile - all work to the
independent tester's advantage. This may explain one of the (good) reasons that there are so many women in
testing. Women are generally acknowledged to have more highly developed people skills than comparable men -
whether it is something innate on the X chromosome, as some people contend, or whether it is that without
superior people skills women are unlikely to make it through engineering school and into an engineering career,
I don't know and won't attempt to say. But the fact is there, and those sharply-honed people skills are important.
7. Tenacity.
An ability to reach compromises and consensus can come at the expense of tenacity. That's the other side of the
people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can
walk all over. The best testers are both - socially adept and tenacious where it matters. The best testers are so
skillful at it that the programmer never realizes that they've been had. Tenacious - my picture is that of an angry
pit bull fastened on a burglar's rear end. Good testers don't let go. You can't intimidate them - even by pulling rank.
They'll need high-level backing, of course, if they're to get you the quality your product and market demand.
8. Organized.
I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers
use files, databases, and all the other accouterments of an organized mind. They make up checklists to keep
themselves on track. They recognize that they too can make mistakes, so they double-check their findings.
They have the facts and figures to support their position. When they claim that there's a bug - believe it, because
if the developers don't, the tester will flood them with well-organized, overwhelming evidence.
A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and
editor, I've learned that the inability to express oneself clearly in writing is often symptomatic of a disorganized
mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville. Good
technical writing is well-organized, clear, and straightforward: and it doesn't depend on a 500,000 word
vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall
apart in an oral presentation - but they are typically the pathological exception. Usually, a well-organized mind
results in clear (even if not inspired) writing and clear writing can usually be transformed through training into
good oral presentation skills.
9. Skeptical.
That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and that all
is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matter. While
they may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I know where
the bugs are.")-and do it with a smile-they ignore all such in-substantive assurances.
10. Self-Sufficient and Tough.
If they need love, they don't expect to get it on the job. They can't be looking for the interaction between them
and programmers as a source of ego-gratification and/or nurturing. Their ego is gratified by finding bugs, with
few misgivings about the pain (in the programmers) that such finding might engender. In this respect, they
must practice very tough love.
11. Cunning.
Or as Gruenberger put it, "low cunning." "Street wise" is another good descriptor, as are insidious, devious,
diabolical, fiendish, contriving, treacherous, wily, canny, and underhanded. Systematic test techniques such as
syntax testing and automatic test generators have reduced the need for such cunning, but the need is still with
us and undoubtedly always will be because it will never be possible to systematize all aspects of testing. There
will always be room for that offbeat kind of thinking that will lead to a test case that exposes a really bad bug.
But this can be taken to extremes and is certainly not a substitute for the use of systematic test techniques.
The cunning comes into play after all the automatically generated "sadistic" tests have been executed.
12. Technology Hungry.
They hate dull, repetitive work - they'll do it for a while if they have to, but not for long. The silliest thing for a
human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a
clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll
find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the
capture/playback tool many times. I've seen dozens of home-brew test data generators. I've seen excellent test
design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of
bottles of white-out. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why
they didn't automate such-and-such, the answer was never "I like to do it by hand." It was always one of the
following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or worst of
all, (3) "Management wouldn't give me the time to learn how to use the tool."
13. Honest.
Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously
agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations
as a human being. They accept the idea that they are no better and no worse, and therefore no less error-prone,
than their programming counterparts. So they apply the same kind of self-assessment procedures that
good programmers do. They'll do test inspections just as programmers do code inspections. The greatest
possible crime in a tester's eyes is to fake test results.
Personal Requirements For Software Quality Assurance Engineers
Challenges
Rapidly changing requirements
Foresee defects that are likely to happen in production
Monitor and Improve the software development processes
Ensure that standards and procedures are being followed
Customer Satisfaction and confidence
Compete in the market
Identifying Software Quality Assurance Personnel Needs:
Requirement Specification
Functional Specification
Technical Specification
Standards document and user manuals – If applicable (e.g. Coding standards document)
Test Environment Setup
Professional Characteristics of a good SQA Engineer
Understanding of business approach and goals of the organization
Understanding of entire software development process
Strong desire for quality
Establish and enforce SQA methodologies, processes and Testing Strategies
Judgment skills to assess high-risk areas of application
Communication with Analysis and Development team
Report defects with full evidence
Take preventive actions
Take actions for Continuous improvement
Reports to higher management
Say No when Quality is insufficient
Work Management
Meet deadlines
Personal Characteristics of a good SQA Engineer
Open Minded
Observant
Perceptive
Tenacious
Decisive
Diplomatic
Keen for further training/trends in QA