
Unit 4: Chapter 1: Verification and Validation

Verification
● Verification is the process of evaluating the intermediary work products of the software development lifecycle to check that we are on the right track to creating the end product.
● Now the question is: what is meant by the intermediary products? The answer: the intermediary products consist of the documents generated during the development phases, such as the requirements specification, design documents, database table designs, ER diagrams, test cases, traceability matrix, etc.
● It is tempting to skip reviewing these documents, but reviewing them helps us find most of the glitches early. If these glitches are found in later phases of the development lifecycle, they are far more costly to fix.
● Verification helps to determine whether the software is of high quality, but it does not guarantee that the system is useful. The main focus of verification is to make the system well-engineered and free from errors.

Validation
● Validation is the process of evaluating the final product to ensure that the software meets all the specified requirements.
● It can also be defined as showing that the product fulfils its intended use when deployed in a suitable environment.
● In other words, the test executions we perform day to day, such as functional testing, regression testing, and system testing, are actually validation activities.
● Validation is performed in the software development lifecycle after the
verification phase.
Difference between Software Verification and Validation

● Verification concentrates on building the system the right way; Validation concentrates on the output of the built system.
● Verification is the process of evaluating the intermediary work products of the software development lifecycle to check that we are on the right track to creating the end product; Validation is the process of evaluating the final product to ensure that the software meets all the specified requirements.
● The aim of Verification is to ensure that the product being developed matches the requirements and design specifications; the aim of Validation is to ensure that the product really meets the user's requirements, and to check whether the specifications were correct in the first place.
● Verification involves reviews, meetings, and inspections; Validation involves testing such as black box testing, white box testing, gray box testing, etc.
● Verification is conducted by the quality assurance team; Validation is conducted by the testing team.
● Execution of code is not a part of Verification; execution of code is a part of Validation.
● Verification explains whether the outputs conform to the inputs; Validation explains whether the software is accepted by the user.
● Plans, requirement specifications, design specifications, code, test cases, etc. are evaluated in Verification; the actual software is tested in Validation.
Goals of Verification and Validation
The goals of verification and validation are as follows:
1. Correctness
The degree to which the product is fault free.
2. Consistency
The degree to which the product is consistent within itself and with other products.
3. Necessity
The degree to which everything in the product is necessary.
4. Sufficiency
The degree to which the product is complete.
5. Performance
The degree to which the product satisfies its performance requirements.

PLANNING VERIFICATION AND VALIDATION

V-Model
● The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the SDLC must be completed before the next phase starts.
● It follows a sequential design process, the same as the waterfall model.
● Testing of the product is planned in parallel with the corresponding stage of development.
● The V-Model contains Verification phases on one side and Validation phases on the other. The two sides are joined by the coding phase, forming a V shape; thus it is known as the V-Model.

The various phases of the Verification side of the V-Model are:

1. Business requirement analysis: This is the first step, where product requirements are understood from the customer's point of view. This phase involves detailed communication to understand the customer's expectations and exact requirements.
2. System Design: In this stage, system engineers analyse and interpret the business of the proposed system by studying the user requirements document.
3. Architecture Design: The baseline for selecting the architecture is that it should realize all the requirements; it typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. Integration test design is carried out in this phase. It is also known as High-Level Design.
4. Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified; this is known as Low-Level Design.
5. Coding Phase: After designing, the coding phase starts. Based on the requirements, a suitable programming language is decided, and there are guidelines and standards for coding. Before check-in to the repository, the final build is optimized for better performance, and the code goes through many code reviews.

The various phases of the Validation side of the V-Model are:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity which can exist independently, e.g., a program module. Unit testing verifies that the smallest entity functions correctly when isolated from the rest of the code/units (see the sketch after this list).
2. Integration Testing: Integration Test Plans are developed during the architecture design phase. These tests verify that units developed and tested independently can coexist and communicate among themselves.
3. System Testing: System Test Plans are developed during the system design phase. System testing ensures that expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis phase. It involves testing the software product in the user's environment. Acceptance tests reveal compatibility problems with the other systems available in the user's environment. They also discover non-functional problems, such as load and performance defects, in the real user environment.
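As a concrete illustration of unit testing from item 1 above, here is a minimal sketch using Python's built-in unittest module. The apply_discount function and its rules are hypothetical; the point is only the shape of a unit test: isolate the smallest entity and compare actual against expected results.

```python
# A minimal sketch of unit testing with Python's built-in unittest module.
# The unit under test (apply_discount) is a hypothetical example.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # Positive scenario: actual result must match expected result.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Negative scenario: invalid input must raise an error.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```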

Software Inspections
● Inspection in software engineering can be defined as a peer review of any work product by qualified individuals who look for defects using a well-defined process.
● It is also called Fagan inspection, after Michael Fagan, the inventor of a famous software inspection process.
● An inspection is one of the most common kinds of review practice found in software projects.
● The objective of software inspection is to recognize defects.
● Software inspection is a review in which the software is examined for glitches, errors, and lapses.
● While inspecting a software system, testers use their knowledge of the system, its application domain, the programming language used, and the design model to find errors.

Inspection roles
● Author: The person who has created the work product that is going to be inspected.
● Moderator: The leader of the inspection, who plans the inspection and manages it.
● Reader: The person who reads through the documents, one item at a time; the other inspectors then point out the defects.
● Recorder/Scribe: The person responsible for recording the defects found during the inspection.
● Inspector: A person who examines the work product to identify possible defects.

The Inspection Process

The inspection process contains the following phases:
1. Planning
2. Overview
3. Individual preparation
4. Inspection meeting
5. Rework
6. Follow-up

1. Planning

● The planning of the inspection is done by the moderator.
● The moderator is responsible for scheduling the review.
● Project planning needs to allow time for review and rework activities, thus giving engineers time to participate carefully in reviews.
● An entry check is performed on the documents to determine which documents are ready to be considered for inspection.

2. Overview

● The author describes the background of the work product.
● The objective of this phase is to discuss the entry and exit criteria.
● In other words, this phase gives an introduction to the objectives of the review and the documents.
● During this phase, the frequency of checking, the assignment of roles, the number of pages to be checked, and other open questions are discussed.

3. Individual Preparation

● Every inspector examines the work product on their own to identify possible defects. This preparation makes the review of the document much more effective.

4. Inspection meeting

● During this phase, the reader goes through the entire work product to surface any defects. The inspectors note down these defects and try to resolve them.
5. Rework

● The author makes changes to the work product according to the action plans from the inspection meeting.
● The changes made by the author must be easy to identify in the follow-up phase, so it is important that the author marks the changes properly.

6. Follow-up

● The changes made by the author are checked to ensure everything is correct.
● After the rework, the moderator makes sure that satisfactory actions have been taken on all logged defects, improvement suggestions, and change requests.
● Measurements are gathered by the moderator during every phase to manage and optimize the review process.

AUTOMATED STATIC ANALYSIS

● Static analysis is a form of inspection in which programs are inspected for errors without actually being executed; the programs are run through a checklist of error classes and heuristics that help identify common faults.
● It is possible to automate some of these checks and heuristics and run programs through the resulting tools to find errors.
● To fulfil this need for automation, automated static analyzers have been developed for most programming languages.
● Automated static analyzers are software tools used for source code processing and verification.
● The tools parse the program text, try to discover potentially erroneous conditions, and bring these to the attention of the V & V team. These tools are cost-effective.
● The following stages are involved in static analysis:
● Control flow analysis: This stage checks for loops with multiple exit or entry points and finds unreachable code, etc. (Unreachable code is code inside a branch or condition that can never be executed, for example code surrounded by unconditional go-to statements.)
● Data use analysis: This stage detects uninitialized variables, variables assigned twice but never used between the assignments, and variables that are declared but never used. Data use analysis also discovers ineffective tests where the test condition is redundant. (Redundant conditions are conditions that are either always true or always false.)
● Interface analysis: This stage checks the consistency of routine and procedure declarations and their use. Interface analysis can detect type errors in languages like FORTRAN and C. It can also detect functions and procedures that are declared but never called, or function results that are never used.
● Information flow analysis: Identifies the dependencies of output variables on input variables. It also shows the conditions that affect a variable's value.
● Path analysis: Identifies paths through the program and sets out the statements executed along each path. Again, this is potentially useful in the review process.
● The table below briefly describes the checks involved in automated static analysis.

Fault Class: Static Analysis Checks

Data faults: variables used before initialization; variables declared but never used; variables assigned twice but never used between assignments; possible array bound violations; undeclared variables.
Control faults: unreachable code; unconditional branches into loops.
Input/output faults: variables output twice with no intervening assignment.
Interface faults: parameter type mismatches; parameter number mismatches; non-usage of the results of functions; uncalled functions and procedures.
Storage management faults: unassigned pointers; memory leaks.
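To make this concrete, here is a minimal sketch of an automated static analyzer written with Python's ast module. It implements just two of the checks from the table, "variables assigned but never used" and "unreachable code"; production tools such as pylint or flake8 go much further.

```python
# A minimal sketch of an automated static analyzer using Python's ast module.
import ast

SOURCE = """
def f(x):
    unused = 42        # data fault: assigned but never used
    y = x + 1
    return y
    print("dead")      # control fault: unreachable code
"""


def check(source: str) -> None:
    tree = ast.parse(source)
    assigned, used = {}, set()
    for node in ast.walk(tree):
        # Record every name that is written (Store) or read (Load/Del).
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned[node.id] = node.lineno
            else:
                used.add(node.id)
        # Unreachable code: any statement after a return in the same block.
        if isinstance(node, ast.FunctionDef):
            for i, stmt in enumerate(node.body[:-1]):
                if isinstance(stmt, ast.Return):
                    nxt = node.body[i + 1]
                    print(f"line {nxt.lineno}: unreachable code")
    for name, lineno in assigned.items():
        if name not in used:
            print(f"line {lineno}: variable '{name}' assigned but never used")


check(SOURCE)
```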

Software Testing

● Software testing is a procedure to verify whether the actual results are the same as the expected results.
● Software testing is performed to provide assurance that the software system does not contain defects.
● Defects are errors detected when the actual result of a software module does not match the expected result.
● Software testing helps us find out whether or not all user requirements are fulfilled by our software.
● Software quality depends on the extent to which the software fulfils user requirements and on the number of defects in the software.
● Software testing is used to give assurance that we deliver a quality product to the customer.
● There are two types of software testing methods:
Types of Software Testing Methods
1. Manual testing
2. Automation testing
(1) Manual testing: In manual testing, the tester takes the role of an end user and tests the whole software manually.
(2) Automation testing: In automation testing, the tester uses automation tools such as Selenium, Mantis, QuickTest Professional (QTP), and HP ALM (Application Lifecycle Management) to test the software product. In automation testing, the testing tools exercise the software and generate a defect report.

There are two broad types of software testing:

1. Black Box Testing
2. White Box Testing
1. Black Box Testing:
● In this type of testing, we check the working of the software by running it.
● We do not need to look at the code of the software in this testing type.
● The tester has no knowledge of the internal working of the software.
● System testing is an example of black box testing.
2. White Box Testing:
● In white box testing, we check the internal working of the code.
● So in this type of testing, the tester needs deep knowledge of the programming language used in the software product.

System Testing
● System testing is the testing of the entire, completely integrated software product.
● System testing is a sequence of different tests whose purpose is to exercise the complete computer-based system.
● System testing is end-to-end testing, i.e., we test the system from the login module through to the logout module.
● System testing contains both functional testing (checking whether the requirements of the user are fulfilled) and non-functional testing (checking whether the expectations of the user are fulfilled).
● System testing is black box testing.
● There are more than 50 types of system testing; some of them are discussed below.

Types of System Testing

1. Usability Testing
2. Load Testing
3. Regression Testing
4. Recovery Testing
5. Migration Testing
6. Functional Testing
7. Hardware/Software Testing

1. Usability Testing
● Usability testing is a type of software testing in which a group of end users of the software system use the software to check its user-friendliness.
● It is non-functional testing.
● This testing basically concentrates on how easily a user can handle the application.
● We verify that, after a few hours of training, an end user can use the system comfortably.
● This type of testing is also known as User Experience Testing.
● It is recommended to perform this testing in the initial design phase of the SDLC, because it finds bugs (errors) in the user interface and gives a chance to fix them early.
2. Load Testing
● Load testing is a type of performance testing which determines the performance of a system under real-life load conditions.
● Load testing helps to determine how the application behaves when multiple users access it at the same time.
● This testing is normally used to find out:
1. The maximum number of users who can access the software at the same time.
2. Whether the currently available infrastructure, i.e., software and hardware, is adequate to run the application.
3. What happens when the maximum number of users access the system at the same time.
4. The scalability (actions taken to increase the capacity of the system, such as adding storage) needed to permit more users to access our application.
● It is non-functional testing.
● Load testing is normally used when we test client/server-based applications and web-based applications.
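A minimal sketch of the idea: many simulated users hit an endpoint concurrently while response times are measured. The URL is a placeholder assumption; in practice dedicated tools such as JMeter or Locust are used.

```python
# A minimal load-test sketch: N simulated users request a URL concurrently.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"   # hypothetical application under test
USERS = 50                    # simulated concurrent users


def one_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(one_request, range(USERS)))

print(f"requests: {len(times)}, avg: {sum(times)/len(times):.3f}s, "
      f"max: {max(times):.3f}s")
```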

3. Regression Testing
● Regression testing is a type of software testing used to check that changes made to the code, whether due to a bug fix or a change in requirements, do not affect the existing working functionality.
● In regression testing, we re-execute already-executed test cases to give assurance that the old functionality still works well after changes to the code.
● This testing is performed to guarantee that new code added to our software does not disturb the working of existing functionality.
4. Recovery Testing
● Recovery testing is done to determine whether the system can recover after a crash caused by a disaster such as a power failure or loss of network connectivity.
● In recovery testing, the system performs a rollback, i.e., it identifies the point where its behaviour was last correct and then replays the operations from that point up to the point where the system crashed.

5. Migration Testing
● Migration testing is performed to give assurance that the software can be moved from older system infrastructure to the current system infrastructure without any problem.
● In migration testing, we check data from the old system for compatibility with the new system.

6. Functional Testing
● Functional testing is a type of software testing which checks that every function present in our software application works as per the user's requirements.
● It is a form of black box testing and does not focus on the source code of the application.
● Every functionality of the system is verified by the tester using appropriate test data, and the actual result is compared with the expected result.
● If there is a difference between the actual result and the expected result, then a bug is detected. Functional testing is done using the Requirement Specification Document.
7. Hardware/Software Testing
● In hardware/software testing, we test the communication between the hardware and the software used in our system. IBM refers to hardware/software testing as "HW/SW Testing".

Component Testing
● Component testing is a software testing type in which testing is performed on each individual component separately, without integrating it with other components.
● It is also referred to as module testing, because it tests a single module at a time.
● Generally, any software as a whole is made up of several components.
● Component-level testing deals with testing these components individually.
● It is one of the most frequent black box testing types, and it is performed by the QA team.

Component Testing Techniques

Based on the depth of the testing level, component testing can be categorized as:
1. CTIS - Component Testing In Small
2. CTIL - Component Testing In Large
1. CTIS - Component Testing In Small
Component testing may be done in isolation from the rest of the components in the software or application under test. When it is performed in isolation from the other components, it is referred to as Component Testing In Small.
Example 1: Consider a website which has 5 different web pages; testing each web page separately, in isolation from the other components, is Component Testing In Small.
2. CTIL - Component Testing In Large
Component testing done without isolating the other components in the software or application under test is referred to as Component Testing In Large.
Let's take an example to understand it better (a code sketch follows the definitions below).
● Suppose there is an application consisting of three components, say Component A, Component B, and Component C.
● The developer has developed component B and wants it tested. But in order to test component B completely, some of its functionality depends on component A and some on component C.
● Functionality flow: A -> B -> C, which means B has dependencies on both A and C.
● In that case, to test component B completely, we can replace component A and component C with a driver and a stub as required.
● "Drivers" are dummy programs used to call the functions of the lowest module when the actual calling function does not exist.
● "Stubs" are code snippets which accept the inputs/requests from the top module and return the results/responses.
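Here is a minimal sketch of the A -> B -> C scenario in Python: component B is under test, a driver stands in for the missing caller A, and a stub stands in for the missing callee C. All function names and values are hypothetical.

```python
# Component Testing In Large for the A -> B -> C flow, sketched in Python.

def component_b(order_total: float, tax_service) -> float:
    """Component under test: B depends on C (tax_service) for the tax rate."""
    rate = tax_service(order_total)   # call that would go to component C
    return round(order_total * (1 + rate), 2)


def stub_tax_service(order_total: float) -> float:
    """Stub for component C: returns a canned response instead of real logic."""
    return 0.10


def driver():
    """Driver for component A: calls B the way A eventually will."""
    actual = component_b(100.0, stub_tax_service)
    expected = 110.0
    assert actual == expected, f"expected {expected}, got {actual}"
    print("component B behaves correctly with stubbed C")


driver()
```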

Test Case Design

● A test case is a group of negative and positive scenarios which are executed to test a specific functionality of our software application.
● A test case is a document which includes a set of test data, preconditions, expected results, and actual results, created for a specific test scenario, whose purpose is to check whether the particular functionality fulfils a specific requirement of the end user.
● A test case acts as the starting point for test execution: after applying a set of input data, the application produces a specific output, called the actual result, and we compare this actual result with the expected result. If the actual and expected results are not the same, then a bug (error) is detected.
For a login page, we can write test cases like the following (a parameterized automation sketch follows the list):
Test Case 1: Check the output after entering a valid user name and valid password on the login page.
Test Case 2: Check the output after entering an invalid user name and invalid password on the login page.
Test Case 3: Check the result when leaving the user name and password fields empty and pressing the Login button.
Test Case 4: Check the result when leaving the user name field empty, filling the password field, and pressing the Login button.
Test Case 5: Check the result on entering an invalid user name and valid password on the login page.
Test Case 6: Check the result on entering a valid user name and invalid password on the login page.
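As a hedged illustration, the six scenarios above could be automated as a single parameterized test. The login helper below is a hypothetical stand-in for the real application interface, not an actual API.

```python
# The six login test cases expressed as one parameterized pytest test.
import pytest


def login(username: str, password: str) -> bool:
    """Hypothetical stand-in: succeeds only for one valid credential pair."""
    return username == "guru99" and password == "pass99"


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("guru99", "pass99", True),    # TC1: valid user name, valid password
        ("wrong", "wrong", False),     # TC2: invalid user name and password
        ("", "", False),               # TC3: both fields empty
        ("", "pass99", False),         # TC4: empty user name, filled password
        ("wrong", "pass99", False),    # TC5: invalid user name, valid password
        ("guru99", "wrong", False),    # TC6: valid user name, invalid password
    ],
)
def test_login(username, password, expected):
    assert login(username, password) is expected
```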

Typical Test Case Parameters:

⮚ Test Case ID
⮚ Test Scenario
⮚ URL (optional)
⮚ Test Steps
⮚ Test Data
⮚ Expected Result
⮚ Actual Result
⮚ Pass/Fail

Following is the format of a standard login test case.

Test Case ID: TU01
Test Scenario: Check customer login with valid data
Test Steps: 1. Go to site http://demo.guru99.com 2. Enter UserId 3. Enter Password 4. Click Submit
Test Data: Userid = guru99, Password = pass99
Expected Results: User should login into application
Actual Results: As expected
Pass/Fail: Pass

Test Case ID: TU02
Test Scenario: Check customer login with invalid data
Test Steps: 1. Go to site http://demo.guru99.com 2. Enter UserId 3. Enter Password 4. Click Submit
Test Data: Userid = guru99, Password = glass99
Expected Results: User should not login into application
Actual Results: As expected
Pass/Fail: Pass

This entire table may be created in Word, Excel, or any other test management tool. That is all there is to test case design.

Test Automation
What is Automation Testing?
● Automation testing, or test automation, is a software testing technique that uses special automated testing software tools to execute a test case suite.
● On the contrary, manual testing is performed by a human sitting in front of a computer, carefully executing the test steps.
● The automation testing software can also enter test data into the system under test, compare expected and actual results, and generate detailed test reports.
● Software test automation demands considerable investments of money and resources.
● Using a test automation tool, it is possible to record a test suite and re-play it as required.
● Once the test suite is automated, no human intervention is required.
● This improves the ROI of test automation.
● The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
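A minimal sketch of what such automation looks like with Selenium WebDriver, replaying test case TU01 from the earlier table. The element names ("uid", "password", "btnLogin") are assumptions for illustration, not verified identifiers of the demo site.

```python
# Automating the TU01 login test case with Selenium WebDriver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # requires Chrome; recent Selenium manages the driver
try:
    driver.get("http://demo.guru99.com")   # site named in the test case
    driver.find_element(By.NAME, "uid").send_keys("guru99")
    driver.find_element(By.NAME, "password").send_keys("pass99")
    driver.find_element(By.NAME, "btnLogin").click()
    # Expected result: user is logged in; here we only check the page changed.
    assert "demo.guru99.com" in driver.current_url
    print("TU01 passed")
finally:
    driver.quit()
```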

Software Measurement
● To assess the quality of the engineered product or system and to better
understand the models that are created, some measures are used.
● These measures are collected throughout the software development life
cycle with an intention to improve the software process on a continuous
basis.
● Measurement helps in estimation, quality control, productivity
assessment and project control throughout a software project.
● Also, measurement is used by software engineers to gain insight into
the design and development of the work products. In addition,
measurement assists in strategic decision-making as a project proceeds.
● Software measurements fall into two categories: direct measures and indirect measures.
● Direct measures include process attributes such as cost and effort applied, and product attributes such as lines of code produced, execution speed, and defects reported.
● Indirect measures include product attributes such as functionality, quality, complexity, reliability, maintainability, and many more.
Size-Oriented Metrics
● Size-oriented metrics are derived by normalizing quality and/or productivity measures by the size of the software that has been produced.
● Size is a direct measure of the software.
● The size measurement is based on a count of the lines of code.
● In size-oriented metrics, the number of physical lines of active code is used for estimating the productivity of the software.
● In general, the higher the SLOC in a module, the less understandable and maintainable the module is.
● For estimating the cost of the project using the size of the software, lines of code per month, i.e., LOC/pm, is used.
● The records may include the following measurements:
1. Number of person-months of effort.
2. Cost of development.
3. Pages of documentation created.
4. Number of errors reported prior to release.
5. Number of defects encountered after release.
6. Number of people on the project.

If a software organization maintains simple records, it can build a table of size-oriented measures for its projects, such as the following data for project alpha.

Project alpha:
● 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000.
● It should be noted that the effort and cost recorded represent all software engineering activities (analysis, design, code, and test), not just coding.
● Further information for project alpha indicates that 365 pages of documentation were developed, 134 errors were recorded before the software was released, and 29 defects were encountered after release to the customer within the first year of operation.
● Three people worked on the development of the software for project alpha.
● To develop metrics that can be compared with similar metrics from other projects, we choose lines of code as our normalization value.
● So, from the information given above, a set of simple size-oriented metrics can be developed for each project (a worked computation follows):
● Errors per KLOC (thousand lines of code)
● Defects per KLOC
● $ per KLOC
● Pages of documentation per KLOC
● Size-oriented metrics are not universally accepted as the best way to measure the software process.
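A worked computation of these metrics from the project alpha figures above:

```python
# Size-oriented metrics for project alpha, computed from the figures given.
loc = 12_100
kloc = loc / 1000
effort_pm = 24          # person-months
cost = 168_000          # dollars
pages = 365
errors = 134            # before release
defects = 29            # after release

print(f"Errors per KLOC:        {errors / kloc:.2f}")    # ~11.07
print(f"Defects per KLOC:       {defects / kloc:.2f}")   # ~2.40
print(f"$ per KLOC:             {cost / kloc:,.0f}")     # ~13,884
print(f"Pages of doc per KLOC:  {pages / kloc:.2f}")     # ~30.17
print(f"Productivity (LOC/pm):  {loc / effort_pm:.1f}")  # ~504.2
```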

Function-Oriented Metrics
● Function-oriented software metrics use a measure of the functionality
delivered by the application as a normalization value.
● Since ‘functionality’ cannot be measured directly, it must be derived
indirectly using other direct measures.
● Function-oriented metrics were first proposed by Albrecht, who
suggested a measure called the function point.
● Function points are derived using an empirical relationship based
on countable (direct) measures of software's information domain and
assessments of software complexity.
● Function points are computed by completing a table in which counts for five information domain characteristics are entered in the appropriate locations.
● The information domain values are defined in the following manner:
Number of user inputs: Each user input that provides distinct application-oriented data to the software is counted. Inputs should be distinguished from inquiries, which are counted separately.

Number of user outputs: Each user output that provides application-oriented information to the user is counted. In this context, output refers to reports, screens, error messages, etc. Individual data items within a report are not counted separately.
Number of user inquiries: An inquiry is defined as an on-line input that results in the generation of some immediate software response in the form of an on-line output. Each distinct inquiry is counted.
Number of files: Each logical master file (i.e., a logical grouping of data that may be one part of a large database or a separate file) is counted.
Number of external interfaces: All machine-readable interfaces (e.g., data files on storage media) that are used to transmit information to another system are counted.

Once these data have been collected, a complexity value is associated with
each count. Organizations that use function point methods develop criteria for
determining whether a particular entry is simple, average, or complex.
Nonetheless, the determination of complexity is somewhat subjective.

To compute function points (FP), the following relationship is used:

FP = count total × [0.65 + 0.01 × Σ(Fi)]

where "count total" is the sum of all FP entries, and the Fi (i = 1 to 14) are complexity adjustment values based on responses to the following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational
environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over
multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the
user?

Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential). The constant values in the equation and the weighting factors that are applied to the information domain counts are determined empirically.
Once function points have been calculated, they are used in a manner
analogous to LOC as a way to normalize measures for software productivity,
quality, and other attributes.
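A hedged sketch of the computation, with hypothetical information domain counts and Fi answers. The per-item weights used here are the commonly cited "average" complexity weights from Albrecht's method, since the original weighting table is not reproduced above; an organization would pick simple/average/complex weights per entry.

```python
# Function point computation: FP = count_total * (0.65 + 0.01 * sum(Fi)).
# Commonly cited "average" complexity weights (assumed here for all entries):
average_weights = {
    "user inputs": 4,
    "user outputs": 5,
    "user inquiries": 4,
    "files": 10,
    "external interfaces": 7,
}

# Hypothetical information-domain counts for an application:
counts = {
    "user inputs": 24,
    "user outputs": 16,
    "user inquiries": 22,
    "files": 4,
    "external interfaces": 2,
}

count_total = sum(counts[k] * average_weights[k] for k in counts)

# Answers to the 14 complexity questions, each rated 0 (not important)
# to 5 (absolutely essential); hypothetical values here.
F = [4, 2, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 0, 4]

fp = count_total * (0.65 + 0.01 * sum(F))
print(f"count total = {count_total}, sum(Fi) = {sum(F)}, FP = {fp:.1f}")
# count total = 318, sum(Fi) = 39, FP = 330.7
```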

Software Cost Estimation

Software Productivity
● In standard economic terms, productivity is the ratio of the amount of goods or services produced to the labour and cost consumed in producing them.
● We can compute the productivity of a manufacturing system by counting the number of units produced by the system and dividing this quantity by the number of person-hours required to create them.
● In simple language, software productivity is the ratio between the amount of software produced and the cost expended to produce it. This cost may be spent on employee salaries, supporting hardware, software costs, etc.

● This software productivity calculation helps us to define the cost and schedule estimates for a software project.
● There are two major factors used to measure productivity:
1) The effort, or number of person-months, needed to build the system; this is the input measure.
2) The size of the software product; this is the output measure.
● We need software metrics that can serve as quantifiable criteria for different characteristics of a software system or software development process.
● Two types of metric have been used:
Types of Metric
1. Size-related metrics
2. Function-related metrics
1. Size-related metrics
The size-related metric usually used is the number of lines of source code.
2. Function-related metrics
Function-related metrics are associated with the overall functionality of the delivered software.
Software productivity is then defined as the quantity of useful functionality produced in some given time.
Function points and application points are the best-known metrics of this type.
A function point is computed by combining a number of different measurements, such as:

⮚ External inputs and outputs
⮚ User interactions
⮚ External interfaces
⮚ Files used by the system

Application points are an alternative to function points, used with languages such as database programming languages or scripting languages. Application points are estimated using:
1. The number of separate screens that are displayed.
2. The number of reports that are produced.
3. The number of modules in imperative programming languages such as Java or C++.
Estimation Techniques
There are four estimation techniques for cost estimation in project management:
Estimation Techniques
1. Analogous Estimating
2. Parametric Estimating
3. Three-Point Estimating
4. Bottom-up Estimating
These four techniques form a hierarchy of accuracy for project cost estimation: analogous estimating provides the least accuracy, while bottom-up estimating provides the most.
1. Analogous Estimating
● In the analogous estimation technique, the cost of the software project is estimated by comparing the current project with similar projects that our organization has completed in the past.
● This technique is used to estimate the project cost when we have very little information about the software project we want to create.
● Consequently, this technique does not give an accurate cost estimate.
● The primary benefits of this technique are the low cost of applying it and the quick results it gives.
2. Parametric Estimating
● Parametric estimation uses historical data to calculate cost estimates.
● Parametric estimating uses a statistical relationship between historical data and other variables, such as lines of code in software development, to compute estimates for different parameters like scope, cost, budget, and schedule.
● If applied correctly, this technique produces estimates with high levels of accuracy.
3. Three-Point Estimates
● This technique is used to decrease the bias and irregularity in estimating assumptions.
● Three estimates are determined instead of one, and their average is used to reduce the uncertainties, risks, and biases.
● PERT (Program Evaluation and Review Technique) is the most commonly used method in three-point estimation.
● The PERT method uses the following estimates:
1. Most Likely Cost (Cm): Here, we assume the normal case, where everything proceeds as usual.
2. Pessimistic Cost (Cp): Here, we assume the abnormal case, where almost everything goes wrong.
3. Optimistic Cost (Co): Here, we assume the best case, where everything goes better than planned.

The PERT estimate formula is: Ce = (Co + 4Cm + Cp) / 6, where Ce is the expected cost.

● The estimate obtained from this technique is better than those of the two techniques above, because it reduces the bias in the data and gives a more accurate cost estimate.
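A worked example of the formula, with hypothetical cost figures:

```python
# Three-point (PERT) estimate: Ce = (Co + 4*Cm + Cp) / 6.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical cost figures in dollars:
Co, Cm, Cp = 40_000, 50_000, 90_000   # optimistic, most likely, pessimistic
print(f"Expected cost Ce = ${pert_estimate(Co, Cm, Cp):,.0f}")  # $55,000
```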
4. Bottom-up Estimating
● The bottom-up estimating technique is also called the "definitive technique".
● This estimation technique is the most accurate, but it is time-consuming and costly.
● In this technique, the cost of each activity is estimated with the greatest level of detail at the bottom level, and the results are then combined to compute the total cost of the project.
● In this technique, the whole project work is divided into the smallest modules.
● The cost of every module is estimated, and the estimates are then combined to state the cost estimate of the project.
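A minimal sketch of the roll-up, with a hypothetical work breakdown and figures:

```python
# Bottom-up estimating: estimate each smallest module's activities,
# then roll the costs up to the total project estimate.
work_breakdown = {
    "login module":     {"design": 2_000, "coding": 5_000, "testing": 3_000},
    "reporting module": {"design": 3_000, "coding": 8_000, "testing": 4_000},
    "admin module":     {"design": 1_500, "coding": 4_000, "testing": 2_500},
}

for module, activities in work_breakdown.items():
    print(f"{module}: ${sum(activities.values()):,}")

total = sum(sum(a.values()) for a in work_breakdown.values())
print(f"Total project cost estimate: ${total:,}")   # $33,000
```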
