What Makes A Good Software Test Engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is also valuable. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine whether a software application is performing correctly.
'Agile' methods such as XP rely on close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. In the XP 'test first' approach, developers create automated unit testing code before the application code, and these automated unit tests essentially embody the requirements.
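As a minimal sketch of this 'test first' style, assuming Python's built-in unittest module and a purely hypothetical apply_discount() requirement (neither is specific to XP or to this FAQ), the test is written first and acts as an executable statement of the requirement:

import unittest

# Hypothetical requirement: orders of $100 or more receive a 10% discount.
# In a 'test first' workflow the tests below are written before apply_discount()
# exists; they fail until the application code is implemented to satisfy them.
def apply_discount(order_total):
    """Application code written after the tests, to make them pass."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

class TestDiscountRequirement(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertEqual(apply_discount(100.00), 90.00)

    def test_no_discount_below_threshold(self):
        self.assertEqual(apply_discount(99.99), 99.99)

if __name__ == "__main__":
    unittest.main()

Because the tests encode the expected behavior directly, they serve as living, executable requirements for the feature.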
The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes (a brief sketch follows this list)
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
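The equivalence-class and boundary-value item in the list above can be made concrete with a small sketch; the age-validation rule and its 18-65 range here are purely hypothetical illustrations:

# Hypothetical example: a field that accepts integer ages from 18 through 65 inclusive.

def is_valid_age(age):
    """Hypothetical validation rule under test."""
    return 18 <= age <= 65

# Equivalence classes: one representative input per class is usually enough.
equivalence_cases = [
    (10, False),   # below the valid range (invalid class)
    (40, True),    # within the valid range (valid class)
    (90, False),   # above the valid range (invalid class)
]

# Boundary value analysis: test the edges of the range and the values just outside them.
boundary_cases = [
    (17, False), (18, True), (19, True),
    (64, True), (65, True), (66, False),
]

for age, expected in equivalence_cases + boundary_cases:
    result = is_valid_age(age)
    assert result == expected, f"age {age}: expected {expected}, got {result}"

print("All equivalence-class and boundary-value checks passed.")

Outlining classes and boundaries this way in the test plan helps keep the number of test cases manageable while still covering the riskiest input values.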
(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful
books with more information.)
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results (an illustrative sketch follows this list).
• Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to prepare test
cases early in the development cycle if possible.
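To illustrate those particulars, a test case might be captured as a simple structured record; the field names and values below are hypothetical, and real projects usually rely on a test management tool or an agreed template rather than ad-hoc code:

# A hypothetical test case captured as a structured record; the field names follow
# the particulars listed above (identifier, name, objective, setup, inputs, steps,
# expected results) but are illustrative, not a standard.
test_case = {
    "id": "TC-LOGIN-001",
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can log in successfully",
    "conditions_setup": "Test user account exists; application is reachable",
    "input_data": {"username": "testuser01", "password": "CorrectPass!"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the 'Log in' button",
    ],
    "expected_results": [
        "User is redirected to the home page",
        "A welcome message displays the user's name",
    ],
}

# Such records can be reviewed early in the development cycle and later
# executed manually or fed to an automated test harness.
for step in test_case["steps"]:
    print(f"[{test_case['id']}] {step}")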
The following are items to consider in the bug tracking and reporting process:
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary (a sketch of such a report as a structured record follows this list)
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
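As a minimal sketch of how the items above might be captured (field names and sample values are hypothetical; in practice a commercial or open-source defect-tracking tool provides these fields), a bug report maps naturally onto a structured record:

from dataclasses import dataclass, field
from typing import Optional

# A hypothetical bug report record mirroring the items listed above. This sketch
# only illustrates the kind of information a complete report carries.
@dataclass
class BugReport:
    bug_id: str
    status: str                      # e.g. 'New', 'Assigned', 'Released for Retest'
    application: str
    version: str
    module: str
    environment: str
    test_case_id: Optional[str]
    summary: str                     # one-line description
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    severity: int = 3                # e.g. 1 (critical) to 5 (low)
    reproducible: bool = True
    tester: str = ""
    reported_date: str = ""
    assigned_to: str = ""
    fix_description: str = ""
    fixed_in_version: str = ""
    retest_result: str = ""
    regression_result: str = ""

report = BugReport(
    bug_id="BUG-1042",
    status="New",
    application="OrderPortal",
    version="2.3.1",
    module="Checkout",
    environment="Windows 10, Chrome 120, staging database",
    test_case_id="TC-CHK-007",
    summary="Total not updated after removing an item from the cart",
    description="Removing the last discounted item leaves the old total displayed.",
    steps_to_reproduce=["Add two items", "Remove one item", "Observe the total"],
    severity=2,
    tester="J. Doe",
    reported_date="2024-05-14",
)
print(f"{report.bug_id} [{report.status}] severity {report.severity}: {report.summary}")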
Also see 'Who should decide when software is ready to be released?' in the LFAQ
section.
Considerations in planning web site testing might include the following:
• What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time or database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in-house that can be adapted, web robot downloading tools, etc.)? (A rough load-timing sketch follows this list.)
• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and
what are the requirements for maintaining, tracking, and controlling page content,
graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
• How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
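As a rough sketch of the load and response-time question in the first item above, the following uses only Python's standard library against a placeholder URL; a dedicated load-testing tool would model realistic user sessions, ramp-up, and think time:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target URL and load level; this sketch only measures response
# times for a single burst of concurrent requests.
URL = "http://localhost:8080/"   # placeholder; substitute the system under test
CONCURRENT_REQUESTS = 20

def timed_request(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
        status = response.status
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_REQUESTS) as pool:
    results = list(pool.map(timed_request, [URL] * CONCURRENT_REQUESTS))

times = sorted(elapsed for _, elapsed in results)
print(f"requests: {len(times)}")
print(f"median response time: {times[len(times) // 2]:.3f}s")
print(f"slowest response time: {times[-1]:.3f}s")

Measurements like these only become meaningful once the expected load and required response times have been agreed with stakeholders, which is why those questions belong in the test planning discussion.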
Some usability guidelines to consider - these are subjective and may or may not apply to
a given situation (Note: more information on usability testing issues can be found in
articles about web site usability in the 'Other Resources' section):
• Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so
that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided
or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages (a simple check for this is sketched after this list).
• The page owner, revision date, and a link to a contact person or organization
should be included on each page.
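A simple check for the 'no dead-end pages' guideline above might look like the following sketch, which uses Python's standard html.parser against placeholder URLs; a real crawl would also handle relative links, frames, and scripted navigation:

from html.parser import HTMLParser
import urllib.request

# Flags pages that contain no outgoing links ('dead ends'). The page list is a
# hypothetical placeholder; a real check would crawl the site and also verify
# that each link target actually resolves.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def find_dead_end_pages(urls):
    dead_ends = []
    for url in urls:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
        parser = LinkCollector()
        parser.feed(html)
        if not parser.links:
            dead_ends.append(url)
    return dead_ends

if __name__ == "__main__":
    pages = ["http://localhost:8080/", "http://localhost:8080/about"]  # placeholders
    for page in find_dead_end_pages(pages):
        print(f"dead-end page (no outgoing links): {page}")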
Many new web site test tools have appeared in recent years, and more than 290 of them are listed in the 'Web Test Tools' section.