
ISTQB Concepts


TESTING CONCEPTS

ISTQB FOUNDATION LEVEL


WHAT IS TESTING?
 To determine whether the software product meets its
requirements, that it is fit for purpose, and to detect defects
 Gaining confidence about the level of quality
 Providing information for decision-making
 Preventing defects

The International Software Testing Qualifications Board (ISTQB) is the
body that certifies testers; to become certified, you need to pass an
ISTQB exam. ISTQB certification is about raising common practices
to the level of best practices.

SEVEN TESTING PRINCIPLES
 Testing shows presence of defects
 Exhaustive testing is impossible
 Early testing
 Defect clustering
 Pesticide paradox
 Testing is context dependent
 Absence-of-errors fallacy

FUNDAMENTAL TEST PROCESS

Test Planning and Control

Test Analysis and Design

Test Implementation and Execution

Evaluating Exit Criteria and Reporting

Test Closure Activities


TEST PLANNING AND CONTROL
 Define the test objectives from the business needs and
specifications
 Determine how the test strategy and project test plan apply to the
software under test
 Document any exceptions to the test strategy
 e.g. only one test case design technique needed for this
functional area because it is less critical
 Identify other software needed for the tests, such as stubs and
drivers, and environment details
 Decide who will be involved and what the test strategy requires
of them
 Set test completion criteria
TEST ANALYSIS AND DESIGN

 Test specification can be broken down into three distinct tasks:

1. identify: determine ‘what’ is to be tested (identify
test conditions) and prioritise
2. design: determine ‘how’ the ‘what’ is to be tested
(i.e. design test cases)
3. build: implement the tests (data, scripts, etc.)
TEST IMPLEMENTATION AND EXECUTION
 Execute the prescribed test cases with the test data in the right
environment
 most important ones first
 you would not execute all test cases if
 testing only fault fixes
 too many faults are found by early test cases
 time pressure
 execution can be performed manually or automated
 Create test suites for efficient test execution
 Maintain a test log and report any incidents
 Retest once defects are fixed (confirmation testing)
EVALUATING EXIT CRITERIA AND REPORTING
 Check the test logs against the exit criteria set during test
planning
 Write a test summary report for stakeholders
 Follow the plan
 mark off progress on the test script
 document actual outcomes from the tests
 capture any other ideas you have for new test cases
 note that these records are used to establish that all test
activities have been carried out as specified
TEST CLOSURE ACTIVITIES

 Finalize and archive testware, such as scripts, the test
environment, and any other test infrastructure, for later reuse
 Hand over testware to the maintenance organization that will
support the software
 Evaluate how the testing went and analyze lessons learned for
future releases and projects
TEST LEVELS

Component Testing

Integration Testing

System Testing

Acceptance Testing
V-MODEL: TEST LEVELS
Each development stage on the left of the V pairs with a test level
on the right:

Business Requirements ↔ Acceptance Testing
Project Specification ↔ Integration Testing in the Large
System Specification ↔ System Testing
Design Specification ↔ Integration Testing in the Small
Code ↔ Component Testing
COMPONENT TEST PROCESS
BEGIN → Component Test Planning → Component Test Specification →
Component Test Execution → Component Test Recording →
Checking for Component Test Completion → END
INTEGRATION TESTING
 Tests the interfaces between components and with other parts of
the system, such as the operating system, file system and
hardware interfaces.

 Big bang integration testing:
all the components are integrated simultaneously.
Adv: everything is finished before integration begins.
Disadv: we cannot easily trace the cause of a failure, as this is
late integration.

 Incremental integration testing:
Adv: defects are found early and are easier to isolate.
Disadv: time consuming, as stubs and drivers have to be built.
Top-down, bottom-up and functional incremental are the types of
incremental integration (a sketch of a stub and driver follows
below).
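
A minimal Python sketch of how a stub and a test driver support
incremental integration; all class and method names here are invented
for illustration:

# The real payment gateway is not integrated yet, so a stub stands in.
class PaymentGatewayStub:
    """Stub: returns a canned answer instead of calling the real component."""
    def charge(self, amount: float) -> bool:
        return True  # always succeeds, so the caller can be tested in isolation

class CheckoutService:
    """Component under test; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "failed"

# The test driver exercises CheckoutService through its interface.
assert CheckoutService(PaymentGatewayStub()).place_order(9.99) == "confirmed"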
SYSTEM TESTING
 Functional system testing
is based on the requirements specification.
 Non-functional system testing
is based on quality characteristics and covers performance
testing, load testing, stress testing, usability testing,
maintainability testing and reliability testing.

Confirmation and regression testing:

Testing a particular functionality to confirm that a defect was
fixed is called confirmation testing. Testing functionality to
ensure that modifications to the software have not caused any
unintended side effects is called regression testing (a sketch
contrasting the two follows below).
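
A pytest-style sketch of the distinction; the function and the defect
number are hypothetical:

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Confirmation test: re-runs the exact scenario that previously failed,
# to confirm the reported defect (hypothetical defect 1234) is fixed.
def test_defect_1234_fixed():
    assert apply_discount(100.0, 15) == 85.0

# Regression tests: re-run scenarios that used to pass, to check the
# fix caused no unintended side effects elsewhere.
def test_no_discount_unchanged():
    assert apply_discount(100.0, 0) == 100.0

def test_full_discount_unchanged():
    assert apply_discount(100.0, 100) == 0.0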
USER ACCEPTANCE TESTING
 Final stage of validation
 the customer (user) should perform it or be closely involved
 the customer can perform any tests they wish, usually based on
their business processes
 final user sign-off
Methods:
 User Acceptance Testing
 Operational Acceptance Testing
 Contract Acceptance Testing
 Compliance Acceptance Testing
 Alpha Testing
 Beta Testing
ALPHA AND BETA TESTS: DIFFERENCES
 Alpha testing:
takes place at the developer's site.
Developers observe the users and note down the problems.

 Beta testing:
field testing; the system is sent to a cross-section of users
who install it and use it under real-world working conditions.
The users send records of incidents back to the development
organization.
TEST TECHNIQUES

STATIC TECHNIQUES
 Review Process
 Activities of a Formal Review
 Roles and Responsibilities
 Types of Review
 Success Factors of Reviews
 Static Analysis By Tools

ACTIVITIES OF A FORMAL REVIEW

Planning → Kick-off → Individual Preparation →
Review Meeting → Rework → Follow-up
ROLES AND RESPONSIBILITIES
 Moderator: leads the review process, stores the data and
schedules the meetings.
 Author: the writer of the document under review.
 Scribe: records each defect mentioned and any suggestions for
process improvement.
 Reviewers: check the material for defects, mostly prior to the
meeting.
 Manager: decides on the execution of reviews, allocates time in
project schedules and determines whether the review process
objectives have been met. The manager may also play the role of
a reviewer, depending on his technical background.

TYPES OF REVIEW

Informal Review, Walkthrough, Technical Review,
Inspection, Management Review, Audit
STATIC ANALYSIS BY TOOLS
 Static analysis finds additional defects before the code is
actually run; in that sense, static analysis is just another form
of testing.
 Static analysis is an examination of requirements, design and
code that differs from more traditional dynamic testing in a
number of important ways.
 There are many static analysis tools, and most of them focus on
software code. Static analysis tools are typically used by
developers.
 Typical checks: coding standards, code metrics, code structure.
 Cyclomatic complexity = number of binary decision statements + 1
(illustrated below)
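
A minimal illustration of the formula, using an invented Python
function:

# This function contains two binary decisions (the two ifs), so by
# the formula its cyclomatic complexity is 2 + 1 = 3: there are
# three independent paths through the code.
def classify(score: int) -> str:
    label = "low"
    if score > 50:   # binary decision 1
        label = "medium"
    if score > 90:   # binary decision 2
        label = "high"
    return label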
DYNAMIC TEST DESIGN TECHNIQUES
 Specification-based or Black-box Techniques

 Structure-based or White-box Techniques

 Experience-based Techniques

SPECIFICATION-BASED OR BLACK-BOX TECHNIQUES
 Equivalence Partitioning
 Boundary Value Analysis
 Decision Table Testing
 State Transition Testing
 Use Case Testing

EQUIVALENCE PARTITIONING (EP)

 divide (partition) the inputs, outputs, etc. into areas which
are treated the same (equivalent)
 assumption: if one value in a partition works, all will work
 one value from each partition is better than all values from one
partition

Example for an input that is valid from 1 to 100:
invalid (0 and below) | valid (1 to 100) | invalid (101 and above)
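
A minimal Python sketch, assuming a system under test that accepts
integers from 1 to 100; the accepts() function is a stand-in for the
real system:

def accepts(value: int) -> bool:
    """Stand-in for the system under test: accepts integers 1..100."""
    return 1 <= value <= 100

# One representative value per partition is assumed to speak for
# the whole partition.
partitions = {
    "invalid (below range)": (-5, False),
    "valid (1..100)": (50, True),
    "invalid (above range)": (200, False),
}

for name, (value, expected) in partitions.items():
    assert accepts(value) == expected, f"{name}: unexpected result for {value}"
print("One test per partition: all passed")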
BOUNDARY VALUE ANALYSIS (BVA)

 faults tend to lurk near boundaries
 boundaries are therefore a good place to look for faults
 test values on both sides of each boundary

Example for an input that is valid from 1 to 100: the boundary test
values are 0, 1, 100 and 101.
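
A companion sketch for the same assumed 1-to-100 input, checking both
sides of each boundary:

def accepts(value: int) -> bool:
    """Stand-in for the system under test: accepts integers 1..100."""
    return 1 <= value <= 100

# Values on both sides of the lower boundary (0, 1) and the upper
# boundary (100, 101).
for value, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert accepts(value) == expected, f"boundary value {value} failed"
print("All boundary value checks passed")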
DECISION TABLES
 add a column to the table for each unique
combination of input conditions
 each entry in the table may be either ‘T’ for true
or ‘F’ for false

Input Conditions
Valid username     T T T T F F F F
Valid password     T T F F T T F F
Account in credit  T F T F T F T F
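
A sketch that generates one test per column of such a table; the rule
that access is granted only when all three conditions hold is an
assumption for illustration:

from itertools import product

# Hypothetical business rule: access is granted only when all three
# conditions are true.
def grant_access(username_ok: bool, password_ok: bool, in_credit: bool) -> bool:
    return username_ok and password_ok and in_credit

# One test case per unique combination of condition values (8 columns).
for combo in product([True, False], repeat=3):
    flags = " ".join("T" if c else "F" for c in combo)
    outcome = "granted" if grant_access(*combo) else "denied"
    print(f"{flags} -> access {outcome}")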
STATE TRANSITION TESTING:
 Based on a state machine model: a system can be in a finite
number of states, and rules (transitions) determine how it moves
from one state to another. Tests are designed to cover the states
and transitions (see the sketch below).

 USE CASE TESTING:
helps us to identify test cases that exercise the whole
system on a transaction-by-transaction basis from start to finish.
Each use case describes the interactions the actor has with
the system in order to achieve a specific task. Use cases are
sequences of steps that describe the interactions between the
actor and the system.
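
A minimal sketch of state transition testing for an invented door
control; the states, events and transition table are assumptions for
illustration:

# Transition table: (current state, event) -> next state.
transitions = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
    ("closed", "lock_cmd"): "locked",
    ("locked", "unlock_cmd"): "closed",
}

def next_state(state: str, event: str) -> str:
    # An event that is invalid in the current state leaves it unchanged.
    return transitions.get((state, event), state)

# One test sequence covering every valid transition once, plus one
# invalid event ("close_cmd" while already closed).
state = "closed"
for event, expected in [("open_cmd", "open"), ("close_cmd", "closed"),
                        ("lock_cmd", "locked"), ("unlock_cmd", "closed"),
                        ("close_cmd", "closed")]:
    state = next_state(state, event)
    assert state == expected, f"after {event}: got {state}"
print("All covered transitions behaved as expected")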

STRUCTURE-BASED OR WHITE-BOX TECHNIQUES
 Statement Testing and Coverage

 Decision Testing and Coverage

 Other Structure-based Techniques

EXAMPLE OF STATEMENT COVERAGE
                     Number of Statements Exercised
Statement Coverage = ------------------------------ × 100%
                     Total Number of Statements

 Black-box testing alone typically achieves only 60 to 70%
statement coverage.

Read A
Read B
C = A + 2 * B
IF C > 50 THEN
  PRINT "Large C"
END IF

Test 1: Let A=2 and B=3, then C=8 (the PRINT statement is not
reached, so 4 of the 5 statements execute: 80% coverage)
Test 2: Let A=20 and B=25, then C=70
So with Test 2 alone we achieve 100% statement coverage (a runnable
version follows below).
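
A runnable Python rendering of the example (the Read statements become
function parameters here):

def program(a: int, b: int) -> None:
    c = a + 2 * b
    if c > 50:
        print("Large C")   # only executed when c > 50

# Test 1: A=2, B=3 -> C=8; the print statement is never reached.
program(2, 3)

# Test 2: A=20, B=25 -> C=70; every statement executes, so this single
# test achieves 100% statement coverage.
program(20, 25)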
DECISION COVERAGE (BRANCH COVERAGE)

                    Number of Decision Outcomes Exercised
Decision Coverage = ------------------------------------- × 100%
                    Total Number of Decision Outcomes

Eg: Read A
    Read B
    C = A - 2 * B
    IF C < 0 THEN
      PRINT "C Negative"
    END IF

Let A=20 and B=15: this test gives 100% statement coverage, but
decision coverage is not 100%, as we have not tested the outcome
where C is not negative.
Let A=10 and B=2: now C > 0, so the other outcome is exercised.
So 100% statement coverage requires only 1 test, but 100% decision
coverage requires 2 tests (a runnable version follows below).
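
The same example in runnable Python form:

def program(a: int, b: int) -> None:
    c = a - 2 * b
    if c < 0:                  # one decision, two outcomes (True/False)
        print("C Negative")

# Test 1: A=20, B=15 -> C=-10; the True outcome runs. Statement
# coverage is 100%, decision coverage only 50%.
program(20, 15)

# Test 2: A=10, B=2 -> C=6; the False outcome runs, bringing decision
# coverage to 100%.
program(10, 2)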
NON-SYSTEMATIC TEST TECHNIQUES
 Trial and error / Ad hoc
 Error guessing / Experience-driven
 User Testing
TEST MANAGEMENT
 Test Organization
 Estimation and Planning
 Test Process Monitoring, Test Reporting and Test Control
 Configuration Management
 Product and Project Risks
 Management of Incidents

TEST ORGANISATION

 Testing may be done by the programmer who wrote the code, by a
tester from the same team, or by a tester reporting to a different
manager.
 It is recommended that testing be done by an independent tester. WHY?
 An independent tester brings a different set of assumptions to
testing and to reviews, which often exposes hidden defects and
problems related to the group's way of thinking.
 Test Lead:
involved in the planning, monitoring and control of the testing
activities and tasks. Makes sure the test environment is put into
place before test execution starts.
 Tester:
in the planning and preparation phases of the testing, testers
should review and contribute to test plans, as well as analyzing,
reviewing and assessing requirements and design specifications.
Should possess: application or business domain knowledge,
technology knowledge, and testing knowledge.
ESTIMATION AND PLANNING
 In Scope and Out Of Scope items
 Test Objectives
 Project and Product risks
 Constraints
 Critical and Non-Critical Aspects
 Assumptions
 Risks
 Software / Hardware required
 Trainings required
Estimation Techniques:
 Metrics-based (top-down approach) – analyzing metrics from past
projects and from industry data.
 Expert-based (bottom-up approach) – consulting the people who will
do the work and other people with expertise on the tasks to be done.
TEST PROCESS MONITORING, TEST REPORTING
AND TEST CONTROL

 Test Monitoring: provides feedback and opportunities to improve
the testing, and provides results. It lets us measure test data
and test coverage, and gather data for future projects.

 Reporting Test Status: effectively communicating our findings to
other project stakeholders.

 Test Control: guiding and corrective actions to achieve the best
possible outcome for the project.

CONFIGURATION MANAGEMENT
 “The process of identifying and defining the configuration
items in a system,
 controlling the release and change of these items throughout
the system life cycle,
 recording and reporting the status of configuration items and
change requests,
 and verifying the completeness and correctness of
configuration items.”
Product and Project Risks:

 Risk is the possibility of a negative or undesirable outcome. It
is a possibility, not a certainty.
 Product Risks: treated as quality risks. Ways to identify them:
a. reading the requirements specifications, design specifications,
user documentation and other items;
b. brainstorming with many of the project stakeholders;
c. one-to-one or small-group sessions with the business and
technology experts in the company.
 Project Risks: include, for example, late delivery of the test
items.
We have four typical options for any product or project risk:
 Mitigate: take steps in advance to reduce the likelihood of the
risk.
 Contingency: have a plan in place to reduce the impact should the
risk occur.
 Transfer: convince some other member of the team or another
project stakeholder to accept the impact of the risk.
 Ignore: an option when the likelihood and impact of the risk are
very low.
INCIDENT MANAGEMENT
 Incident: any event that occurs during testing that requires
subsequent investigation or correction.

                               Defects found by testers
 Defect Detection Percentage = ----------------------------------------------------- × 100%
                               Defects found by testers + Defects found in the field

e.g. if testers find 80 defects and another 20 are found in the
field, DDP = 80 / (80 + 20) = 80%.

 An incident is raised when actual results do not match expected
results
 possible causes:
 software fault
 test was not performed correctly
 expected results incorrect
 incidents can be raised against documentation as well as code
