SYIT SE Unit 4
Verification
● Verification is the process of evaluating the intermediary work products of a software development lifecycle to check that we are on the correct track toward the end product.
● Now the question is: what is meant by intermediary products? They are the documents generated during the development phases, such as the requirements specification, design documents, database table designs, ER diagrams, test cases, the traceability matrix, etc.
● It is tempting to skip reviewing these documents, but reviewing them helps us find most of the glitches early. If these glitches are found in later phases of the development lifecycle, fixing them is very costly.
● Verification helps to establish whether the software is of high quality, but it does not guarantee that the system is useful. The main focus of verification is to make the system well-engineered and free from errors.
Validation
● Validation is the process of evaluating the final product to ensure that the software meets all the specified requirements.
● It can also be defined as showing that the product fulfils its intended use when deployed in a suitable environment.
● In other words, the test executions we perform in everyday practice are actually validation activities, such as functional testing, regression testing, system testing, etc.
● Validation is performed in the software development lifecycle after the
verification phase.
Difference between Software Verification and Validation
● Verification concentrates on building the system the right way; validation concentrates on the output of the built system.
● Verification is the process of evaluating the intermediary work products of the software development lifecycle to check that we are on the correct track toward the end product; validation is the process of evaluating the final product to ensure that the software meets all the specified requirements.
● The aim of verification is to ensure that the product being developed matches the requirements and design specifications; the aim of validation is to ensure that the product really meets the user's requirements, and to check whether the specifications were correct in the first place.
● Verification involves reviews, meetings and inspections; validation involves testing such as black box testing, white box testing, gray box testing, etc.
● Verification is conducted by the quality assurance team; validation is conducted by the testing team.
● Execution of code is not part of verification; execution of code is part of validation.
Software Inspections
● Inspection in software engineering is a peer review of any work product by qualified individuals, who look for defects using a well-defined process.
● It is also called Fagan inspection, after Michael Fagan, the inventor of a famous software inspection process.
● An inspection is one of the most common kinds of review practices
found in software projects.
● The objective of software inspection is to recognize defects.
● Software inspection is a review in which the software is examined for glitches, errors and lapses.
● While inspecting a software system, testers use their knowledge of the system, its application domain, the programming language used and the design model to find errors.
Inspection roles
● Author: The person who created the work product that is going to be inspected.
● Moderator: The leader of the inspection, who plans the inspection and manages it.
● Reader: The person who reads through the documents, one item at a time; the other inspectors then point out defects.
● Recorder/Scribe: The person responsible for recording the defects found during the inspection.
● Inspector: A person who examines the work product to identify possible defects.
Inspection process stages
1. Planning
2. Overview
3. Individual Preparation
4. Inspection meeting
● During this phase the reader goes through the entire work product to surface any defects. The inspectors note these defects down so that they can be resolved during rework.
5. Rework
6. Follow-up
Software Testing
System Testing
● System Testing is the testing of the entire, completely integrated software product.
● System Testing is a sequence of various tests whose purpose is to exercise the complete computer-based system.
● System testing is end-to-end testing, i.e., we test the system from the login module to the logout module.
● System testing covers both functional testing (checking whether the requirements of the user are fulfilled) and non-functional testing (checking whether the expectations of the user are fulfilled).
● System Testing is black box testing.
● System Testing has more than 50 types. Below, we discuss some of them.
3. Regression Testing
● Regression Testing is a type of software testing used to check that changes made to the code, whether to fix an error or to meet a changed requirement, do not affect existing working functionality.
● In regression testing, we re-execute already executed test cases to gain assurance that old functionalities still work well after the code changes, as sketched below.
● This testing is performed to guarantee that new code added to our software does not disturb the working of existing functionalities.
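A minimal sketch of this idea, assuming Python's built-in unittest module (the function under test and its values are hypothetical): the previously passing test cases are simply re-run after the code change.

    import unittest

    # Hypothetical function that was just modified to fix a defect.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100.0), 2)

    class RegressionSuite(unittest.TestCase):
        # Already-executed test cases, re-run to confirm that existing
        # functionality still works after the change.
        def test_existing_discount_behaviour(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(50.0, 0), 50.0)

    if __name__ == "__main__":
        unittest.main()

If all previously passing tests still pass, the change has not regressed existing behaviour.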
4. Recovery Testing
● Recovery Testing is done to determine whether the system can recover after a crash caused by a disaster such as a power failure or loss of network connectivity.
● In recovery testing, the system performs a rollback: it identifies the point where its behaviour was last known to be correct and then, from that point, re-performs the operations up to the point where the system crashed (see the sketch below).
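The rollback idea can be sketched as follows (all names hypothetical): the system keeps a checkpoint of its last known-good state and, after a simulated crash, restores that checkpoint and replays the pending operations.

    import copy

    class RecoverableCounter:
        # Toy system that checkpoints state and replays operations on recovery.
        def __init__(self):
            self.state = {"total": 0}
            self.checkpoint = copy.deepcopy(self.state)  # last known-good point
            self.pending_ops = []                        # operations since checkpoint

        def add(self, n):
            self.pending_ops.append(n)
            self.state["total"] += n

        def commit(self):
            # State verified as correct: take a fresh checkpoint.
            self.checkpoint = copy.deepcopy(self.state)
            self.pending_ops = []

        def recover(self):
            # Roll back to the checkpoint, then replay pending operations.
            self.state = copy.deepcopy(self.checkpoint)
            for n in self.pending_ops:
                self.state["total"] += n

    counter = RecoverableCounter()
    counter.add(5)
    counter.commit()      # behaviour known to be correct here
    counter.add(3)        # ... then the system "crashes"
    counter.recover()     # rollback + replay restores total = 8
    assert counter.state["total"] == 8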
5. Migration Testing
● Migration testing is performed to give assurance that the software can be moved from an older system infrastructure to the current system infrastructure without any problem.
● In migration testing, we check data from the old system for compatibility with the new system, as illustrated below.
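As an illustrative sketch (field names and rules are hypothetical), a migration test can take a record exported from the old system and check that it satisfies the constraints the new system expects:

    # Record exported from the old system (hypothetical format).
    old_record = {"cust_id": "0042", "name": "Asha ", "joined": "2019-03-01"}

    # Transformation into the new system's schema (assumed for illustration).
    def migrate(record):
        return {
            "customer_id": int(record["cust_id"]),  # old IDs were zero-padded strings
            "name": record["name"].strip(),
            "joined": record["joined"],             # ISO dates carry over unchanged
        }

    new_record = migrate(old_record)

    # Compatibility checks: the migrated data must fit the new schema.
    assert isinstance(new_record["customer_id"], int)
    assert new_record["name"] != ""
    assert len(new_record["joined"].split("-")) == 3
    print("old record migrated cleanly:", new_record)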
6. Functional Testing
● Functional testing is a type of software testing which checks that every function present in our software application works as per the requirements of the user.
● This testing is a form of black box testing and does not focus on the source code of the application.
● Every functionality of the system is verified by the tester using appropriate test data, and the actual result is compared with the expected result.
● If there is a difference between the actual result and the expected result, a bug is detected. Functional testing is driven by the Requirement Specification Document; a small example follows.
7. Hardware/Software Testing
● In Hardware/Software Testing, we test the communication between the hardware and the software used in our system. IBM refers to Hardware/Software Testing as "HW/SW Testing".
Component Testing
● Component testing is a type of software testing in which testing is performed on each individual component separately, without integrating it with other components.
● It is also referred to as Module Testing because it tests a single module at a time.
● Generally, any software as a whole is made up of several components.
● Component-level testing deals with testing these components individually, as in the sketch below.
● It is one of the most frequent black box testing types, performed by the QA team.
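A minimal sketch (the module and its stub are hypothetical): the component under test is exercised on its own, with a stub standing in for the component it would normally integrate with.

    # Component under test: computes an invoice total using a price lookup service.
    def invoice_total(item_ids, price_service):
        return sum(price_service(i) for i in item_ids)

    # Stub replacing the real pricing component, so this module is tested alone.
    def stub_price_service(item_id):
        return {1: 10.0, 2: 2.5}[item_id]

    # Component-level check without integrating any other real component.
    assert invoice_total([1, 2, 2], stub_price_service) == 15.0
    print("invoice_total works in isolation")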
Test Automation
What is Automation Testing?
● Automation Testing, or Test Automation, is a software testing technique that uses special automated testing software tools to execute a test case suite.
● In contrast, Manual Testing is performed by a human sitting in front of a computer, carefully executing the test steps.
● The automation testing software can also enter test data into the System Under Test, compare expected and actual results, and generate detailed test reports; a minimal sketch follows below.
● Software test automation demands considerable investments of money and resources.
● Using a test automation tool, it is possible to record a test suite and re-play it as required.
● Once the test suite is automated, no human intervention is required, which improves the ROI of test automation.
● The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
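As a minimal sketch using Python's built-in unittest (the System Under Test here is a hypothetical login function): the suite feeds in test data, compares expected and actual results, and produces a detailed report with no human intervention.

    import io
    import unittest

    # Hypothetical System Under Test.
    def login(user, password):
        return user == "admin" and password == "s3cret"

    class AutomatedLoginSuite(unittest.TestCase):
        def test_valid_credentials(self):
            self.assertTrue(login("admin", "s3cret"))

        def test_invalid_credentials(self):
            self.assertFalse(login("admin", "wrong"))

    if __name__ == "__main__":
        # Execute the whole suite and capture a detailed text report.
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(AutomatedLoginSuite)
        report = io.StringIO()
        unittest.TextTestRunner(stream=report, verbosity=2).run(suite)
        print(report.getvalue())

Once recorded in this form, the suite can be re-played on every build.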
Software Measurement
● To assess the quality of the engineered product or system and to better
understand the models that are created, some measures are used.
● These measures are collected throughout the software development life
cycle with an intention to improve the software process on a continuous
basis.
● Measurement helps in estimation, quality control, productivity
assessment and project control throughout a software project.
● Also, measurement is used by software engineers to gain insight into
the design and development of the work products. In addition,
measurement assists in strategic decision-making as a project proceeds.
● Software measurements fall into two categories, namely direct measures and indirect measures.
● Direct measures include measures of the software process (such as cost and effort applied) and of the product (such as lines of code produced, execution speed, and defects reported over a period of time).
● Indirect measures of the product include functionality, quality, complexity, reliability, maintainability, and many more.
Size-Oriented Metrics
● Size-oriented metrics are derived by normalizing quality and productivity measures by the size of the software that has been produced.
● Size is a direct measure of software.
● The size measurement is based on counting lines of code.
● In size-oriented metrics, the number of physical lines of active code is used for estimating the productivity of the software.
● In general, the higher the SLOC in a module, the less understandable and maintainable the module is.
● For estimating the cost of a project from the size of the software, lines of code per person-month (LOC/pm) is used.
● It may include the following measurements:
1. Number of person-months of effort
2. Cost of development
3. Pages of documentation created
4. Number of errors reported prior to release
5. Number of defects encountered after release
6. Number of people on the project
Project alpha:
● 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000.
● It should be noted that the effort and cost recorded here represent all software engineering activities (analysis, design, code and test), not just coding.
● Further information for project alpha indicates that 365 pages of documentation were developed, 134 errors were recorded before the software was released, and 29 defects were encountered after release to the customer within the first year of operation.
● Three people worked on the development of software for project alpha.
● To develop metrics that can be compared with similar metrics from other projects, we choose lines of code as our normalization value.
● So, from the information given above, a set of simple size-oriented metrics can be developed for each project, computed below:
● Errors per KLOC (thousand lines of code)
● Defects per KLOC
● $ per KLOC
● Pages of documentation per KLOC
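Plugging project alpha's figures into these metrics is straightforward arithmetic (the computation below just restates the numbers given above):

    loc = 12_100        # lines of code
    effort_pm = 24      # person-months of effort
    cost = 168_000      # development cost in $
    pages = 365         # pages of documentation
    errors = 134        # errors recorded before release
    defects = 29        # defects encountered after release

    kloc = loc / 1000.0                                       # 12.1 KLOC
    print(f"Errors per KLOC : {errors / kloc:.1f}")           # ~11.1
    print(f"Defects per KLOC: {defects / kloc:.1f}")          # ~2.4
    print(f"$ per KLOC      : {cost / kloc:,.0f}")            # ~13,884
    print(f"Pages per KLOC  : {pages / kloc:.1f}")            # ~30.2
    print(f"Productivity    : {loc / effort_pm:.0f} LOC/pm")  # ~504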
● Size-oriented metrics are not universally accepted as the best way to
measure the software process.
Function-Oriented Metrics
● Function-oriented software metrics use a measure of the functionality
delivered by the application as a normalization value.
● Since ‘functionality’ cannot be measured directly, it must be derived
indirectly using other direct measures.
● Function-oriented metrics were first proposed by Albrecht, who
suggested a measure called the function point.
● Function points are derived using an empirical relationship based
on countable (direct) measures of software's information domain and
assessments of software complexity.
● Function points are computed by completing a table of the software's information domain values.
● Five information domain characteristics are determined, and a count is entered for each.
● Information domain values are defined in the following manner:
Number of user inputs: Each user input that provides distinct application-oriented data to the software is counted. Inputs should be distinguished from inquiries, which are counted separately.
Number of user outputs: Each user output that provides application-oriented information to the user is counted (reports, screens, error messages, etc.).
Number of user inquiries: Each distinct on-line input that results in the generation of some immediate software response is counted.
Number of files: Each logical master file (a logical grouping of data) is counted.
Number of external interfaces: Each machine-readable interface used to transmit information to another system is counted.
Once these data have been collected, a complexity value is associated with
each count. Organizations that use function point methods develop criteria for
determining whether a particular entry is simple, average, or complex.
Nonetheless, the determination of complexity is somewhat subjective.
FP = count total × [0.65 + 0.01 × Σ(Fi)], where count total is the sum of all weighted FP entries and the Fi (i = 1 to 14) are complexity adjustment values based on responses to fourteen questions about the software.
Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential). The constant values in the equation and the weighting factors applied to the information domain counts are determined empirically.
Once function points have been calculated, they are used in a manner
analogous to LOC as a way to normalize measures for software productivity,
quality, and other attributes.
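As an illustration (the counts and Fi answers are hypothetical; the weights shown are the classic average-complexity weighting factors for the five information domain values), the FP computation works like this:

    # (information domain value, hypothetical count, average-complexity weight)
    domain = [
        ("user inputs",         12, 4),
        ("user outputs",         8, 5),
        ("user inquiries",       5, 4),
        ("files",                4, 10),
        ("external interfaces",  2, 7),
    ]

    # count total: the sum of the weighted information domain counts.
    count_total = sum(count * weight for _, count, weight in domain)   # 162

    # Fourteen complexity adjustment questions, each rated 0..5 (values assumed).
    fi = [3] * 14                                                      # sum(Fi) = 42

    fp = count_total * (0.65 + 0.01 * sum(fi))
    print(f"count total = {count_total}, FP = {fp:.1f}")               # FP = 173.3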
Cost Estimation Techniques
1. Analogous Estimating
● This technique is used to estimate the project cost when we have very little information about the software project we want to create.
● As a result, this technique does not give an accurate cost estimate.
● Its primary benefits are that it costs little to apply and gives quick results.
2. Parametric Estimating
● Parametric estimating uses historical data to calculate cost estimates.
● It applies a statistical relationship between historical data and other variables, such as lines of code in software development, to compute an estimate for parameters like scope, cost, budget, and schedule; a minimal sketch follows below.
● If we apply this technique correctly, it produces estimates with high levels of accuracy.
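A minimal sketch of the idea (all historical figures hypothetical): derive a cost rate from past projects, then apply it to the new project's size parameter.

    # Historical data from past projects: (KLOC delivered, cost in $).
    history = [(10, 140_000), (25, 340_000), (40, 560_000)]

    # Statistical relationship: average cost per KLOC over the historical data.
    rate = sum(cost for _, cost in history) / sum(kloc for kloc, _ in history)

    # Parametric estimate for a new project expected to be 30 KLOC.
    estimate = rate * 30
    print(f"cost/KLOC = ${rate:,.0f}, estimated cost = ${estimate:,.0f}")

In practice the relationship may be fitted by regression rather than a simple average, but the principle is the same.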
3. Three-Point Estimates
● This technique is used to decrease the bias and uncertainty in estimating assumptions.
● Three estimates are worked out instead of one, and their weighted average is used to reduce the uncertainties, risks, and biases.
● PERT (Program Evaluation and Review Technique) is the most widely used three-point estimation method.
● The PERT method uses the following three estimates, which are combined in the worked example after this list:
1. Most Likely Cost (Cm): Here, we assume the normal case, where everything proceeds as usual.
2. Pessimistic Cost (Cp): Here, we assume the abnormal case, where almost everything goes wrong.
3. Optimistic Cost (Co): Here, we assume the best case, where everything goes better than planned.