
System testing is highly complementary to other phases of testing. The component and integration test phases are conducted taking inputs from the functional specification and design. The main focus during these testing phases is technology and product implementation. On the other hand, customer scenarios and usage patterns serve as the basis for system testing. Thus the system testing phase complements the earlier phases with an explicit focus on customers. The system testing phase helps in switching this focus of the product development team towards customers and their use of the product.
To summarize, system testing is done for the following reasons.
1. Provide independent perspective in testing
2. Bring in customer perspective in testing
3. Provide a "fresh pair of eyes" to discover defects not found earlier by testing
4. Test product behavior in a holistic, complete, and realistic environment
5. Test both functional and non-functional aspects of the product
6. Build confidence in the product
7. Analyze and reduce the risk of releasing the product
8. Ensure all requirements are met and ready the product for acceptance testing.

6.3 FUNCTIONAL VERSUS NON-FUNCTIONAL TESTING


Functional testing involves testing a product's functionality and features. Non-functional testing involves testing the product's quality factors. System testing comprises both functional and non-functional test verification.

Functional testing helps in verifying what the system is supposed to do. It aids in testing the product's features or functionality. It has only two results as far as requirements fulfilment is concerned: met or not met. If requirements are not properly enumerated, functional requirements may be understood in many ways. Hence, functional testing should have very clear expected results documented in terms of the behavior of the product. Functional testing comprises simple methods and steps to execute the test cases. Functional testing results normally depend on the product, not on the environment. It uses a pre-determined set of resources and configuration except for a few types of testing such as compatibility testing where configurations play a role, as explained in Chapter 4. Functional testing requires in-depth customer and product knowledge as well as domain knowledge so as to develop different test cases and find critical defects, as the focus of the testing is to find defects. Failures in functional testing normally result in fixes in the code to arrive at the right behavior. Functional testing is performed in all phases of testing such as unit testing, component testing, integration testing, and system testing. Having said that, the functional testing done in the system testing phase (functional system testing) focuses on product features as against component features and interface features that get focused on in earlier phases of testing.
Non-functional testing is performed to verify the quality factors (such as reliability, scalability, etc.). These quality factors are also called non-functional requirements. Non-functional testing requires the expected results to be documented in qualitative and quantifiable terms. Non-functional testing requires a large amount of resources and the results are different for different configurations and resources. Non-functional testing is very complex due to the large amount of data that needs to be collected and analyzed. The focus of non-functional testing is to qualify the product and is not meant to be a defect-finding exercise. Test cases for non-functional testing include clear pass/fail criteria. However, test results are concluded both on pass/fail definitions and on the experiences encountered in running the tests.

Apart from verifying the pass or fail status, non-functional test results are also determined by the amount of effort involved in executing them and any problems faced during execution. For example, if a performance test met the pass/fail criteria after 10 iterations, then the experience is bad and the test result cannot be taken as a pass. Either the product or the non-functional testing process needs to be fixed here.

Non-functional testing requires understanding the product behavior, design, and architecture and also knowing what the competition provides. It also requires analytical and statistical skills as the large amount of data generated requires careful analysis. Failures in non-functional testing affect the design and architecture much more than the product code. Since non-functional testing is not repetitive in nature and requires a stable product, it is performed in the system testing phase.

The points discussed in the above paragraphs are summarized in Table 6.1.

Some of the points mentioned in Table 6.1 may be seen as judgmental and subjective. For example, design and architecture knowledge is needed for functional testing also. Hence all the above points have to be taken as guidelines, not dogmatic rules.

Since both functional and non-functional aspects are being tested in the system testing phase, the question that can be asked is "What is the right proportion of test cases/effort for these two types of testing?" Since functional testing is a focus area starting from the unit testing phase while non-functional aspects get tested only in the system testing phase, it is a good idea that a majority of system testing effort be focused on the non-functional aspects. A 70%-30% ratio between non-functional and functional testing can be considered good and a 50%-50% ratio is a good starting point. However, this is only a guideline, and the right ratio depends more on the context, type of release, requirements, and products.

Table 6.1 Functional testing versus non-functional testing.

Testing aspects | Functional testing | Non-functional testing
Involves | Product features and functionality | Quality factors
Tests | Product behavior | Behavior and experience
Result conclusion | Simple steps written to check expected results | Huge data collected and analyzed
Results vary due to | Product implementation | Product implementation, resources, and configurations
Testing focus | Defect detection | Qualification of product
Knowledge required | Product and domain | Product, domain, design, architecture, statistical skills
Failures normally due to | Code | Architecture, design, and code
Testing phase | Unit, component, integration, system | System
Test case repeatability | Repeated many times | Repeated only in case of failures and for different configurations
Configuration | One-time setup for a set of test cases | Configuration changes for each test case

6.4 FUNCTIONAL SYSTEM TESTING


As explained earlier, functional testing is performed at different phases and the focus is on product level features. As functional testing is performed at various testing phases, there are two obvious problems. One is duplication and the other one is gray area. Duplication refers to the same tests being performed multiple times and gray area refers to certain tests being missed out in all the phases. A small percentage of duplication across phases is unavoidable as different teams are involved. Performing cross reviews (involving teams from earlier phases of testing) and looking at the test cases of the previous phase before writing system test cases can help in minimizing the duplication. A small percentage of duplication is advisable, as different people from different teams test the features with different perspectives, yielding new defects.

Gray areas in testing happen due to lack of product knowledge, lack of knowledge of customer usage, and lack of co-ordination across test teams. Such gray areas in testing make defects seep through and impact customer usage. A test team performing a particular phase of testing may assume that a particular test will be performed by the next phase. This is one of the reasons for such gray areas. In such cases, there has to be a clear guideline for team interaction to plan for the tests at the earliest possible phase. A test case moved from a later phase to an earlier phase is a better alternative than delaying a test case from an earlier phase to a later phase, as the purpose of testing is to find defects as early as possible. This has to be done after completing all tests meant for the current phase, without diluting the tests of the current phase.
There are multiple ways system functional testing is performed. There are also many ways product level test cases are derived for functional testing. Some of the common techniques are given below.

1. Design/architecture verification
2. Business vertical testing
3. Deployment testing
4. Beta testing
5. Certification, standards, and testing for compliance.
6.4.1 Design/Architecture Verification
developed and
In this method of functional testing, the test cases are
actua
checked against the design and architecture to see whether they are
product-level test cases.Comparing thiswith integration testing, the test case
for integration testing are created by looking at interfaces whereas systen
level test cases are created first and verified with design and architecture
to check whether they are product-level or component-level test cases. The
integration test cases focus oninteractions between modules or component
whereas the functional system test focuses on the behavior of the complet:
product. A side benefit of this exercise is ensuring completeness of th
product implementation. This technique helps in validating the produd
features that are written based on customer scenarios and verifying them
using product implementation. If there is a test case that is a custome
scenario but failed validation using this technique, then it is mov
appropriately to component or integration testing phases. Since functiond
testing is performed at various test phases, it is important to reject the te:
cases and move them to an earlier phase to catch defects early and avo"
any major surprise at a later phases. Some of the guidelines used to reje
test cases for system functional testing include the following.
1. Is this focusing on code logic, data structures, and unit of the product? (If yes, then it belongs to unit testing.)
2. Is this specified in the functional specification of any component? (If yes, then it belongs to component testing.)
3. Is this specified in design and architecture specification for integration testing? (If yes, then it belongs to integration testing.)
4. Is it focusing on product implementation but not visible to customers? (This is implementation to be covered in unit/component/integration testing.)
5. Is it the right mix of customer usage and product implementation? (Customer usage is a prerequisite for system testing.)
5.2 Performance Testing
Performance specifications (requirements) are documented in a performance test plan. Ideally, this is done
during the requirements development phase of any system development project, prior to any design effort.
Performance specifications (requirements) should ask the following questions, at a minimum:
 In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and
out of the scope for this test?
 For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak
vs. nominal)?
 What does the target system (hardware) look like (specify all server and network appliance
configurations)?
 What are the time requirements for any/all backend batch processes (specify peak vs. nominal)?
A system, along with its functional requirements, must meet the quality requirements. One example
of a quality requirement is the performance level. The users may have objectives for a software system in terms
of memory use, response time, throughput, and delays. Thus, performance testing is to test the run-time
performance of the software on the basis of various performance factors. Performance testing becomes
important for real-time embedded
systems, as they demand critical performance requirements.
This testing requires that performance requirements must be clearly mentioned in the SRS and
system test plans. The important thing is that these requirements must be quantified. For example, a
requirement that the system
should return a response to a query in a reasonable amount of time is not an acceptable requirement; the
time must be specified in a quantitative way. In another example, for a Web application, you need to know
at least two things:
(a) expected load in terms of concurrent users or HTTP connections and
(b) acceptable response time.
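As a rough illustration (not part of the original text), the sketch below fires a fixed number of concurrent simulated users at a hypothetical endpoint, measures response times, and compares an approximate 95th percentile against an assumed acceptable limit. The URL, user count, and threshold are placeholders to be replaced by the values from the performance test plan.

```python
# Minimal load/response-time sketch; endpoint, load, and threshold are assumptions.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/search?q=test"  # hypothetical endpoint
CONCURRENT_USERS = 50          # (a) expected load in concurrent users
ACCEPTABLE_RESPONSE_SEC = 2.0  # (b) acceptable response time

def timed_request(_: int) -> float:
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test() -> None:
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        times = sorted(pool.map(timed_request, range(CONCURRENT_USERS)))
    avg = statistics.mean(times)
    p95 = times[int(0.95 * len(times)) - 1]  # approximate 95th percentile
    print(f"avg={avg:.3f}s  p95={p95:.3f}s")
    assert p95 <= ACCEPTABLE_RESPONSE_SEC, "performance requirement not met"

if __name__ == "__main__":
    run_load_test()
```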
Performance testing is often used as a part of the process of performance profile tuning. The goal
is to identify the ‘weakest links’—the system often carries a number of parts which, with a little tweak,
can significantly improve the overall performance of the system. It is sometimes difficult to identify which
parts of the system represent the critical paths. To help identify critical paths, some test tools include (or
have as add-ons) instrumentation agents that run on the server and report transaction times, database
access times, network overhead, and other server monitors. Without such instrumentation, the use of
primitive system tools may be required (e.g. Task Manager in Microsoft Windows). Performance testing
usually concludes that it is the software (rather than the hardware) that contributes most to delays
(bottlenecks) in data processing.
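The snippet below is a crude stand-in for such instrumentation using only the standard library; the transaction steps are hypothetical, and a real agent would also report database access times and network overhead. It simply times each step with a decorator and lists the slowest ones as candidates for the critical path.

```python
# Timing decorator as a rough substitute for server-side instrumentation agents.
import time
from collections import defaultdict
from functools import wraps

timings = defaultdict(list)  # step name -> list of elapsed times in seconds

def instrument(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings[func.__name__].append(time.perf_counter() - start)
    return wrapper

@instrument
def query_database():   # hypothetical transaction step
    time.sleep(0.05)

@instrument
def render_response():  # hypothetical transaction step
    time.sleep(0.01)

for _ in range(100):
    query_database()
    render_response()

# Report the steps with the largest total time first: likely bottlenecks.
for name, samples in sorted(timings.items(), key=lambda kv: -sum(kv[1])):
    print(f"{name}: total={sum(samples):.2f}s over {len(samples)} calls")
```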

5.3 Stress Testing


Stress testing is also a type of load testing, but the difference is that the system is put under loads beyond
the limits so that the system breaks. Thus, stress testing tries to break the system under test by
overwhelming its resources in order to find the circumstances under which it will crash. The areas that
may be stressed in a system are:
 Input transactions
 Disk space
 Output
 Communications
 Interaction with users
Stress testing is important for real-time systems where unpredictable events may occur, resulting in
input loads that exceed the values described in the specification, and the system cannot afford to fail due
to maximum load on resources. Therefore, in real-time systems, all the threshold values and system
limits must be noted carefully. Then, the system must be stress-tested on each individual value.
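As a minimal sketch of this idea, the snippet below keeps multiplying an input load until the system under test fails and records the breaking point; `process_batch` is a hypothetical stand-in for submitting transactions to the real system.

```python
# Stress-test sketch: escalate the load until the first failure (assumed interface).
def process_batch(n_transactions: int) -> None:
    """Hypothetical system call; raises an exception once resources are exhausted."""
    if n_transactions > 10_000:          # stand-in for a real resource limit
        raise MemoryError("out of resources")

def find_breaking_point(start: int = 100, factor: int = 2) -> int:
    load = start
    while True:
        try:
            process_batch(load)
        except Exception as exc:         # the crash we are trying to provoke
            print(f"system failed at load={load}: {exc!r}")
            return load
        load *= factor                   # push beyond the specified limits

if __name__ == "__main__":
    find_breaking_point()
```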
Stress testing demands a considerable amount of time to prepare for the test and a large amount of
resources during its actual execution. Therefore, the cost of stress testing must be weighed
against the risk of not identifying volume-related failures. For example, many of the defence systems
which are real-time in nature, demand the systems to perform continuously in warfare conditions. Thus,
for a real-time defence system, we must stress-test the system; otherwise, there may be loss of equipment
as well as life.
5.4 Configuration Testing
Diversity in configuration for web applications makes the testing of these systems very difficult. As
discussed above, there may be various types of browsers supporting different operating systems,
variations in servers, networks, etc. Therefore, configuration testing becomes important so that there is
compatibility between various available resources and application software. The testers must consider
these configurations and compatibility issues so that they can design the test cases incorporating all the
configurations. Some points to be careful about while testing configuration are:
1. There are a number of different browsers and browser options. The web application has to be
designed to be compatible with the majority of browsers.
2. The graphics and other objects on a website have to be tested on multiple browsers. If more than
one browser will be supported, then the graphics have to be visually checked for differences in
the physical appearance. Some of the things to check are centering of objects, table layouts,
colours, monitor resolution, forms, and buttons.
3. The code that executes from the browser also has to be tested. There are different versions of
HTML. They are similar in some ways but they have different tags which may produce different
features. Some of the other codes to be tested are Java, JavaScript, ActiveX, VBscripts, Cgi-Bin
Scripts, and Database access. Cgi-Bin Scripts have to be checked for end-to-end operations, and
this is most essential for e-commerce sites. The same goes for database access.
4. All new technologies used in web development, like graphics designs and interface calls to
different APIs, may not be available in all the operating systems. Test your web application on
different operating systems like Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.
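As a rough sketch of driving such a configuration matrix, the snippet below iterates over assumed browser and operating-system lists and applies a placeholder rendering check to each combination; in practice each pair would launch the application in that actual environment (for example through a browser automation grid).

```python
# Configuration-matrix sketch; browser/OS lists and the check are assumptions.
from itertools import product

BROWSERS = ["Chrome", "Firefox", "Edge", "Safari"]
OPERATING_SYSTEMS = ["Windows", "Linux", "macOS", "Solaris"]

def check_rendering(browser: str, os_name: str) -> bool:
    """Placeholder for real checks of layout, graphics, forms, and buttons."""
    return not (browser == "Safari" and os_name == "Windows")  # example failure rule

def run_configuration_tests() -> None:
    failures = [(b, o) for b, o in product(BROWSERS, OPERATING_SYSTEMS)
                if not check_rendering(b, o)]
    total = len(BROWSERS) * len(OPERATING_SYSTEMS)
    print(f"tested {total} configurations, {len(failures)} failed: {failures}")

if __name__ == "__main__":
    run_configuration_tests()
```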

5.5 Security Testing


Today, web applications store more vital data, and the number of transactions on the web has
increased tremendously with the increasing number of users. Therefore, in the Internet environment, the
most challenging issue is to
protect the web applications from hackers, crackers, spoofers, virus launchers, etc. Through security
testing, we try to ensure that data on the web applications remain confidential, i.e. there is no
unauthorized access. Security testing also ensures that users can perform only those tasks that they are
authorized to perform.
In a web application, the risk of attack is multifold, i.e. it can be on the web software, client-side
environment, network communications, and server-side environments. Therefore, web application
security is particularly important because these are generally accessible to more users than the desktop
applications. Often, they are accessed from different locations, using different systems and browsers,
exposing them to different security issues, especially external attacks. It means that web applications
must be designed and developed such that they are able to nullify any attack from outside. Therefore,
this issue is also related to testing of web application in terms of security. We need to design the test
cases such that the application passes the security test.
Security Test Plan
Security testing can be divided into two categories: testing the security of the infrastructure hosting the
web application and testing for vulnerabilities of the web application. Firewalls and port scans can be
the solution for the security of the infrastructure. For vulnerabilities, user authentication, restricted and
encrypted use of cookies and of the data communicated must be planned. Moreover, users should not be able to
browse through the directories on the server.
Planning for security testing can be done with the help of some threat models. These models may
be prepared at the time of requirement gathering and test plan can also be prepared correspondingly.
These threat models will help in incorporating security issues in designing and later can also help in
security testing.
To perform security testing on a component, find all of its interfaces. This is
because most of the security bugs are found at the interfaces. The interfaces are then prioritized
according to their level of vulnerability. The high-priority interfaces are tested thoroughly by injecting
mutated data to be accessed by that interface in order to check the security.
While performing security testing, the testers should take care that they do not modify the
configuration of the application or the server, services running on the server, and existing user or
customer data hosted by the application.
Various Threat Types and their corresponding Test cases
a) Unauthorized user/Fake identity/Password cracking: When an unauthorized user tries to access the
software by using fake identity, then security testing should be done such that any unauthorized user
is not able to see the contents/data in the software.
b) Cross-site scripting (XSS): When a user inserts HTML/client-side script in the user interface of a
web application and this insertion is visible to other users, it is called cross-site scripting (XSS).
An attacker can use this method to execute a malicious script or URL on the victim’s browser. Using
cross-site scripting, an attacker can use scripts like JavaScript to steal user cookies and information
stored in the cookies. To avoid this, the tester should additionally check the web application for XSS.
c) Buffer overflows: Buffer overflow is another big problem when handling memory allocation if there
is no overflow check in the client software. Due to this problem, malicious code can be executed by
hackers. In the application, check the buffer overflow module and the different ways of submitting a
range of input lengths to the application.
d) URL manipulation: There may be chances that communication through HTTP is also not safe. The
web application uses the HTTP GET method to pass information between the client and the server.
The information is passed in parameters in the query string. The attacker may change some
information in the query string passed from a GET request so that he may get some information or
corrupt the data. When one attempts to modify the data in this way, it is known as fiddling of data.
The tester should check if the application passes important information in the query string and design
the test cases correspondingly. Write the test cases such that a general user tries to modify the private
information.
e) SQL injection: Hackers can also put SQL statements through the web application user interface into
queries meant for querying the database. In this way, they can get vital information from the server
database. Even if the attacker is only successful in crashing the application, the SQL query error shown
on the browser can give the attacker the information they are looking for. Design the test cases such
that special characters from user inputs are handled/escaped properly in such cases.
f) Denial of service: When a service does not respond, it is denial of service. There are several ways
that can make an application fail, for example, heavy load put on the application, distorted data that
may crash the application, overloading of memory, etc. Design the test cases considering all these
factors.
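The sketch below illustrates this style of test case for the XSS and SQL injection threats above. The `render_comment` function, the payload lists, and the assertions are illustrative assumptions; a real test would submit the payloads through the application's user interface or HTTP API and inspect the responses.

```python
# Injection-test sketch: feed typical malicious inputs and check they are neutralized.
import html

XSS_PAYLOADS = ['<script>alert(1)</script>', '"><img src=x onerror=alert(1)>']
SQLI_PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]

def render_comment(user_input: str) -> str:
    """Hypothetical stand-in: a safe implementation escapes HTML metacharacters."""
    return html.escape(user_input)

def test_xss_payloads_are_escaped():
    for payload in XSS_PAYLOADS:
        rendered = render_comment(payload)
        # No raw angle brackets should survive, so no script/tag can execute.
        assert "<" not in rendered and ">" not in rendered

def test_sqli_payloads_do_not_leak_errors():
    for payload in SQLI_PAYLOADS:
        # A real test would send the payload to the application and assert that
        # no raw database error text appears in the HTTP response.
        response_body = render_comment(payload)
        assert "SQL syntax" not in response_body and "ORA-" not in response_body

if __name__ == "__main__":
    test_xss_payloads_are_escaped()
    test_sqli_payloads_do_not_leak_errors()
    print("injection checks passed")
```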

5.6 Recovery Testing


Recovery is just like the exception handling feature of a programming language. It is the ability of a
system to restart operations after the integrity of the application has been lost. It reverts to a point where
the system was functioning correctly and then, reprocesses the transactions to the point of failure.
Some software systems (e.g. operating system, database management systems, etc.) must recover
from programming errors, hardware failures, data errors, or any disaster in the system. So the purpose
of this type of system testing is to show that these recovery functions do not work correctly.
The main purpose of this test is to determine how good the developed software is when it faces
a disaster. A disaster can be anything from unplugging the running system from the power or network,
to stopping the database, or crashing the developed software itself. Thus,
recovery testing is the activity of testing how well the software is able to recover from crashes,
hardware failures, and other similar problems. It is the forced failure of the software in various ways
to verify that the recovery is properly performed. Some examples of recovery testing are given below:
1) While the application is running, suddenly restart the computer and thereafter, check the validity
of application’s data integrity.
2) While the application receives data from the network, unplug the cable and plug it in after a while,
and analyse the application’s ability to continue receiving data from the point when the network
connection disappeared.
3) Restart the system while the browser has a definite number of sessions and after rebooting, check
that it is able to recover all of them.
Recovery tests would determine whether the system can return to a well-known state and ensure that no
transactions have been compromised. Systems with automated recovery are designed for this purpose.
There can be a provision of multiple CPUs and/or multiple instances of devices, and mechanisms to detect
the failure of a device. A ‘checkpoint’ system can also be put in place that meticulously records transactions
and system states periodically so as to preserve them in case of failure. This information allows the system to
return to a known state after the failure.
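A minimal sketch of checking such checkpoint-based recovery is given below; the checkpoint file name and state layout are illustrative assumptions rather than any particular product's format. The idea is simply to persist a known state, simulate a crash, and verify that the restored state matches the last checkpoint.

```python
# Checkpoint/restore sketch for recovery testing (file name and state are assumed).
import json
import os
import tempfile

CHECKPOINT_FILE = os.path.join(tempfile.gettempdir(), "app_checkpoint.json")

def write_checkpoint(state: dict) -> None:
    with open(CHECKPOINT_FILE, "w") as fh:
        json.dump(state, fh)

def restore_from_checkpoint() -> dict:
    with open(CHECKPOINT_FILE) as fh:
        return json.load(fh)

def test_recovery_restores_last_checkpoint():
    state = {"processed_transactions": 120, "balance": 4500}
    write_checkpoint(state)

    # Simulate a crash: in-memory work done after the checkpoint is lost.
    state["processed_transactions"] = 125
    del state

    recovered = restore_from_checkpoint()
    # The system must return to the well-known checkpointed state; transactions
    # after that point must be reprocessed rather than silently lost or corrupted.
    assert recovered == {"processed_transactions": 120, "balance": 4500}

if __name__ == "__main__":
    test_recovery_restores_last_checkpoint()
    print("recovery check passed")
```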
Beizer proposes that testers should work on the following areas during recovery testing:
 Restart If there is a failure and we want to recover and start again, then first the current system
state and transaction states are discarded. Following the criteria of checkpoints as discussed
above, the most recent checkpoint record is retrieved and the system is initialized to the states in
the checkpoint record. Thus, by using checkpoints, a system can be recovered and started again
from a new state. Testers must ensure that all transactions have been reconstructed correctly and
that all devices are in proper states. The system now is in a position to begin to process new
transactions.
 Switchover Recovery can also be done if there are standby components and in case of failure of
one component, the standby takes over the control. The ability of the system to switch to a new
component must be tested. A good way to perform recovery testing is under maximum load.
Maximum load would give rise to transaction inaccuracies and the system would crash, exposing
defects and design flaws.

5.7 Regression Testing


In this category, new tests are not designed. Instead, test cases are selected from the existing pool and
executed to ensure that nothing is broken in the new version of the software. The main idea in regression
testing is to verify that no defect has been introduced into the unchanged portion of a system due to
changes made elsewhere in the system. During system testing, many defects are revealed and the code
is modified to fix those defects. As a result of modifying the code, one of four different scenarios can
occur for each fix:
 The reported defect is fixed.
 The reported defect could not be fixed in spite of making an effort.
 The reported defect has been fixed, but something that used to work before has been failing.
 The reported defect could not be fixed in spite of an effort, and something that used to work
before has been failing.
Given the above four possibilities, it appears straightforward to re-execute every test case from version
n −1 to version n before testing anything new. Such a full test of a system may be prohibitively
expensive. Moreover, new software versions often feature many new functionalities in addition to the
defect fixes. Therefore, regression tests would take time away from testing new code. Regression testing
is an expensive task; a subset of the test cases is carefully selected from the existing test suite to:
(i) maximize the likelihood of uncovering new defects and
(ii) reduce the cost of testing.
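One common way to build such a subset, sketched below under the assumption that a test-to-module coverage map is available (for example from coverage or traceability tools), is to keep only the test cases that exercise the modules changed by the defect fixes.

```python
# Regression-subset selection sketch; the coverage map and changed set are assumptions.
TEST_COVERAGE = {
    "test_login":        {"auth", "session"},
    "test_checkout":     {"cart", "payment"},
    "test_search":       {"catalog"},
    "test_profile_edit": {"auth", "profile"},
}

CHANGED_MODULES = {"auth"}   # modules touched by the defect fixes in version n

def select_regression_tests(coverage: dict, changed: set) -> list:
    """Pick tests whose covered modules intersect the changed modules."""
    return sorted(name for name, modules in coverage.items() if modules & changed)

if __name__ == "__main__":
    subset = select_regression_tests(TEST_COVERAGE, CHANGED_MODULES)
    print("regression subset:", subset)   # ['test_login', 'test_profile_edit']
```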

5.8 Acceptance Testing


Developers/testers must keep in mind that the software is being built to satisfy the user requirements
and no matter how elegant its design is, it will not be accepted by the users unless it helps them achieve
their goals as specified in the requirements. After the software has passed all the system tests and defect
repairs have been made, the customer/client must be involved in the testing with proper planning. The
purpose of acceptance testing is to give the end user a chance to provide the development team with
feedback as to whether or not the software meets their needs. Ultimately, it’s the user that needs to be
satisfied with the application, not the testers, managers, or contract writers.
Acceptance testing is one of the most important types of testing we can conduct on a product. It
is more important to worry about whether users are happy with the way the program works than about
whether the program passes a bunch of tests that were created by testers in an attempt to validate
requirements that an analyst did their best to capture and a programmer interpreted based on their
understanding of those requirements.
Thus, acceptance testing is the formal testing conducted to determine whether a software system
satisfies its acceptance criteria and to enable buyers to determine whether to accept the system or not.
Acceptance testing must take place at the end of the development process. It consists of tests to
determine whether the developed system meets the predetermined functionality, performance, quality,
and interface criteria acceptable to the user. Therefore, the final acceptance acknowledges that the entire
software product adequately meets the customer’s requirements. User acceptance testing is different
from system testing. System testing is invariably performed by the development team which includes
developers and testers. User acceptance testing, on the other hand, should be carried out by end-users.
Thus, acceptance testing is designed to:
 Determine whether the software is fit for the user.
 Make users confident about the product.
 Determine whether a software system satisfies its acceptance criteria.
 Enable the buyer to determine whether to accept the system or not.
The final acceptance marks the completion of the entire development process. It happens when the
customer and the developer have no further problems.
The acceptance test might be supported by the testers. It is very important to define the acceptance
criteria with the buyer during various phases of SDLC. A well-defined acceptance plan will help
development teams to understand users’ needs. The acceptance test plan must be created or reviewed by
the customer. The development team and the customer should work together and make sure that they:
 Identify interim and final products for acceptance, acceptance criteria, and schedule.
 Plan how and by whom each acceptance activity will be performed.
 Schedule adequate time for the customer to examine and review the product.
 Prepare the acceptance plan.
 Perform formal acceptance testing at delivery.
 Make a decision based on the results of acceptance testing.
Entry Criteria:
 System testing is complete and defects identified are either fixed or documented.
 Acceptance plan is prepared and resources have been identified.
 Test environment for the acceptance testing is available.

Exit Criteria:
 Acceptance decision is made for the software.
 In case of any warning, the development team is notified.

Types of Acceptance Testing


Acceptance testing is classified into the following two categories:
a) Alpha Testing Tests are conducted at the development site by the end users. The test environment
can be controlled a little in this case.
b) Beta Testing Tests are conducted at the customer site and the development team does not have any
control over the test environment.

a) ALPHA TESTING
Alpha is the test period during which the product is complete and usable in a test environment, but not
necessarily bug-free. It is the final chance to get verification from the customers that the trade-offs made
in the final development stage are coherent.
Therefore, alpha testing is typically done for two reasons:
(i) to give confidence that the software is in a suitable state to be seen by the customers (but not
necessarily released).
(ii) to find bugs that may only be found under operational conditions. Any other major defects or
performance issues should be discovered in this stage.
Since alpha testing is performed at the development site, testers and users together perform this
testing. Therefore, the testing is done in a controlled manner such that if any problem comes up, it can be
managed by the testing team.
Entry Criteria:
 All features are complete/testable (no urgent bugs).
 High bugs on primary platforms are fixed/verified.
 50% of medium bugs on primary platforms are fixed/verified.
 All features have been tested on primary platforms.
 Performance has been measured/compared with previous releases (user functions).
 Usability testing and feedback (ongoing).
 Alpha sites are ready for installation.

Exit Criteria:
 Get responses/feedback from customers.
 Prepare a report of any serious bugs being noticed.
 Notify bug-fixing issues to developers.
b) BETA TESTING
Once the alpha phase is complete, development enters the beta phase. Beta is the test period during
which the product should be complete and usable in a production environment. The purpose of the beta
ship and test period is to test the company’s ability to deliver and support the product (and not to test
the product itself). Beta also serves as a chance to get a final ‘vote of confidence’ from a few customers
to help validate our own belief that the product is now ready for volume shipment to all customers.
Versions of the software, known as beta-versions, are released to a limited audience outside the
company. The software is released to groups of people so that further testing can ensure the product has
few or no bugs. Sometimes, beta-versions are made available to the open public to increase the feedback
field to a maximal number of future users.
Testing during the beta phase, informally called beta testing, is generally constrained to black-
box techniques, although a core of test engineers are likely to continue with white-box testing parallel
to beta tests. Thus, the term beta test can refer to a stage of the software—closer to release than being
‘in alpha’—or it can refer to the particular group and process being done at that stage. So a tester might
be continuing to work in white-box testing while the software is ‘in beta’ (a stage), but he or she would
then not be a part of ‘the beta test’ (group/activity).
Entry Criteria:
 Positive responses from alpha sites.
 Customer bugs in alpha testing have been addressed.
 There are no fatal errors which can affect the functionality of the software.
 Secondary platform compatibility testing is complete.
 Regression testing corresponding to bug fixes has been done.
 Beta sites are ready for installation.

Exit Criteria:
 Get responses/feedback from the beta testers.
 Prepare a report of all serious bugs.
 Notify bug-fixing issues to developers.

Guidelines for Beta Testing


 Don’t expect to release new builds to beta testers more than once every two weeks.
 Don’t plan a beta with fewer than four releases.
 If you add a feature, even a small one, during the beta process, the clock goes back to the
beginning of eight weeks and you need another 3–4 releases.
