Unit III
Cardinality
Cardinality states the number of occurrences of one object that can be related to the number of occurrences of another object (for example, one customer may place many orders: a one-to-many cardinality).
Modality
• If the relationship is optional, the modality of the relationship is zero.
• If the relationship is mandatory, the modality of the relationship is one.
Analysis Modeling
• The analysis model operates as a link between the 'system description' and the 'design model'.
• In the analysis model, the information, functions, and behaviour of the system are defined, and these are translated into the architecture, interface, and component-level design during 'design modeling'.
The following rules of thumb must be followed while creating the analysis model:
1. Abstraction
• At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment.
• The lower levels of abstraction provide a more detailed description of the solution.
• A sequence of instructions that performs a specific and limited function is referred to as a procedural abstraction.
• A collection of data that describes a data object is a data abstraction (both kinds are sketched below).
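A minimal Python sketch of both kinds of abstraction; the Door example and its fields are illustrative, not taken from the text.

    from dataclasses import dataclass

    # Data abstraction: a collection of data that describes a data object.
    @dataclass
    class Door:
        material: str
        swing_direction: str
        opening_mechanism: str

    # Procedural abstraction: a named sequence of instructions with a
    # specific, limited function; callers use the name, not the details.
    def open_door(door: Door) -> None:
        print(f"Unlatching the {door.material} door and swinging it "
              f"{door.swing_direction} using its {door.opening_mechanism}.")

    open_door(Door("wooden", "inward", "handle"))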
2. Architecture
• The complete structure of the software is known as the software architecture.
• Structure provides conceptual integrity for a system in a number of ways.
• The architecture is the structure of program modules and the specialized ways in which they interact with each other.
• The components use the structure of data.
• The aim of software design is to obtain an architectural framework of the system.
• The more detailed design activities are conducted from this framework.
3. Patterns
A design pattern describes a design structure that solves a particular design problem in a specified context.
4. Modularity
• Software is divided into separately named and addressable components, sometimes called modules, which are integrated to satisfy the problem requirements.
• Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules must be specified and designed so that information such as the algorithms and data contained in a module is inaccessible to other modules that have no need for that information (a sketch follows).
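A minimal Python sketch of information hiding, assuming a hypothetical Stack module: clients use only push/pop/peek and never touch the internal list.

    class Stack:
        """The algorithm and data (the underlying list) are kept
        private; other modules see only push/pop/peek."""

        def __init__(self):
            self._items = []          # internal detail, hidden by convention

        def push(self, item):
            self._items.append(item)

        def pop(self):
            if not self._items:
                raise IndexError("pop from empty stack")
            return self._items.pop()

        def peek(self):
            if not self._items:
                raise IndexError("peek at empty stack")
            return self._items[-1]

    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2   # clients never reach into s._items directly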
6. Functional independence
• Functional independence is the concept of separation; it is related to the concepts of modularity, abstraction, and information hiding.
• Functional independence is assessed using two qualitative criteria, i.e. cohesion and coupling.
Cohesion
• Cohesion is an extension of the information hiding concept.
• A cohesive module performs a single task and requires little interaction with components in other parts of the program.
Coupling
Coupling is an indication of the interconnection between modules in a software structure (both criteria are contrasted in the sketch below).
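A minimal Python sketch contrasting the two criteria; the statistics example and its names are hypothetical. Each function does one task (high cohesion); communicating through parameters is loose data coupling, while the shared global shows the tighter coupling to avoid.

    # High cohesion: each function performs one well-defined task.
    def mean(values):
        return sum(values) / len(values)

    # Loose (data) coupling: communication happens only through
    # parameters and return values.
    def report(values):
        return f"mean = {mean(values):.2f}"

    # Tighter (common) coupling, best avoided: modules communicating
    # through shared global state make changes ripple unpredictably.
    _shared = {"values": [2, 4, 6]}

    def report_coupled():
        return f"mean = {mean(_shared['values']):.2f}"

    print(report([2, 4, 6]))   # mean = 4.00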
7. Refinement
• Refinement is a top-down design approach.
• It is a process of elaboration.
• A program is developed by successively refining levels of procedural detail.
• A hierarchy is established by decomposing a statement of function in a stepwise manner until programming-language statements are reached (see the sketch below).
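A minimal Python sketch of stepwise refinement, using a hypothetical report-producing function: the top-level statement of function is decomposed until each step is an ordinary language statement.

    # Step 1: the highest-level statement of function.
    def produce_report(records):
        cleaned = remove_invalid(records)   # Step 2: decompose into subtasks
        totals = summarise(cleaned)
        return format_lines(totals)

    # Step 3: refine each subtask until the statements are directly
    # expressible in the programming language.
    def remove_invalid(records):
        return [r for r in records if r.get("amount") is not None]

    def summarise(records):
        total = sum(r["amount"] for r in records)
        return {"count": len(records), "total": total}

    def format_lines(totals):
        return [f"{k}: {v}" for k, v in totals.items()]

    print(produce_report([{"amount": 10}, {"amount": None}, {"amount": 5}]))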
8. Refactoring
• It is a reorganization technique which simplifies the design of a component without changing its functional behaviour.
• Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure (see the sketch below).
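A minimal Python sketch, assuming a hypothetical pricing function: the refactored version has a simpler internal structure, and the assertions check that the external behaviour is unchanged.

    # Before refactoring: duplicated logic, hard to follow.
    def price_before(qty, unit, member):
        if member:
            if qty > 10:
                return qty * unit * 0.9 * 0.95
            return qty * unit * 0.95
        if qty > 10:
            return qty * unit * 0.9
        return qty * unit

    # After refactoring: same external behaviour, simpler structure.
    def price_after(qty, unit, member):
        total = qty * unit
        if qty > 10:
            total *= 0.9        # bulk discount
        if member:
            total *= 0.95       # membership discount
        return total

    # The external behaviour is unchanged for every sampled input.
    for q, u, m in [(5, 2.0, False), (12, 2.0, True), (12, 2.0, False)]:
        assert abs(price_before(q, u, m) - price_after(q, u, m)) < 1e-9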
9. Design classes
• The software model is defined as a set of design classes.
• Every class describes the elements of the problem domain, focusing on features of the problem that are user-visible.
Design Notations:
Data Flow Diagram:
DFD Symbols:
External Entity: A producer or consumer of information that resides outside the bounds of the system to be modeled.
Data Object: A data object; the arrowhead indicates the direction of data flow.
Data Store: A repository of data that is to be stored for use by one or more processes.
Example: DFD
Structured Flow Chart:
A Structure Chart (SC) in software engineering and organizational theory is a chart which shows
the breakdown of a system to its lowest manageable levels. They are used in structured programming
to arrange program modules into a tree. Each module is represented by a box, which contains the
module's name.
• A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition of
a problem. It is a tool to aid in software design. It is particularly helpful on large problems.
• A structure chart illustrates the partitioning of a problem into subproblems and shows the
hierarchical relationships among the parts. A classic "organization chart" for a company is an
example of a structure chart.
• The top of the chart is a box representing the entire problem, the bottom of the chart shows a
number of boxes representing the less complicated subproblems. (Left-right on the chart is
irrelevant.)
• A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It
does NOT show the order in which tasks are performed. It does NOT illustrate an algorithm.
• Each block represents some function in the system, and thus should contain a verb phrase, e.g.
"Print report heading."
Module:
• A unit of execution.
• Accepts parameters as inputs.
• Label: a verb phrase.
Special Modules:
• "Macro" module: avoid.
• Multi-entry module: avoid.
Condition:
• A connector element.
• No split or join.
• No label.
Loop:
• The calls of the subordinate modules run in a loop.
• The loop carries no label or condition on the chart.
• The loop (and its condition) is defined in the module specification.
• The module specification is the decisive element.
Data Flow:
• Bound to an invocation.
• Has a direction.
• Label: the name of the data passed.
• Specified in the data dictionary.
Control Flow:
• Flow of control (as distinct from invocation) determines the execution path of the targeted module.
• Bound to an invocation.
• Has a direction.
• No splits or joins.
• Label: a flag, decision, or condition.
• Specified in the data dictionary.
Decision Table:
A decision table is a good way to deal with combinations of things (e.g. inputs).
• Decision tables provide a systematic way of stating complex business rules, which is useful for
developers as well as for testers.
• Decision tables can be used in test design whether or not they are used in specifications, as
they help testers explore the effects of combinations of different inputs and other software
states that must correctly implement business rules.
• Helping the developers to do a better job can also lead to better relationships with them. Testing combinations can be a challenge, as the number of combinations can often be huge (a sketch follows this list).
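A minimal Python sketch of a decision table, using a hypothetical discount rule: the business rules live in one table, and every combination of conditions can be enumerated as a test.

    from itertools import product

    # A hypothetical decision table: each combination of the two
    # conditions (columns of the table) maps to exactly one action.
    def discount(is_member: bool, order_over_100: bool) -> str:
        table = {
            (True,  True):  "20% discount",
            (True,  False): "10% discount",
            (False, True):  "5% discount",
            (False, False): "no discount",
        }
        return table[(is_member, order_over_100)]

    # Decision tables make it easy to enumerate every combination.
    for combo in product([True, False], repeat=2):
        print(combo, "->", discount(*combo))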
Software Testing Strategies:
• Testing is a set of activities which are planned in advance, i.e. before the start of development, and organized systematically.
• The software engineering literature defines various testing strategies for implementing testing.
• All the strategies provide a testing template.
Verification vs. Validation
• Verification is the process of finding whether the software meets the specified requirements of a particular phase. Validation is the process of checking whether the software meets the requirements and expectations of the customer.
• Verification evaluates an intermediate product. Validation evaluates the final product.
• The objective of verification is to check whether the software is constructed according to the requirement and design specifications. The objective of validation is to check whether the specifications are correct and satisfy the business need.
• Verification describes whether the outputs are as per the inputs or not. Validation explains whether the outputs are accepted by the user or not.
• Verification is done before validation. Validation is done after verification.
• Plans, requirements, specifications, and code are evaluated during verification. The actual product or software is tested during validation.
• Verification is a manual checking of files and documents. Validation is a computer-based checking of the developed program, i.e. the software is executed.
Strategy of testing
Unit testing
Unit testing starts at the centre, where each unit as implemented in source code is tested.
Integration testing
Integration testing focuses on the design and construction of the software architecture.
Validation testing
Validation testing checks that all the requirements (functional, behavioral, and performance) are validated against the constructed software.
System testing
System testing confirms that all system elements and their performance are tested as a whole.
Unit Testing: A level of the software testing process where individual units of software are tested. The purpose is to validate that each unit of the software performs as designed.
Integration Testing: A level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
System Testing: A level of the software testing process where a complete, integrated system is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.
Acceptance Testing: A level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.
Unit Testing
• Unit testing focuses on the smallest unit of software design, i.e. the module or software component.
• Each module interface is tested to verify the flow of input and output.
• The local data structures are examined to verify that integrity is maintained during execution.
• Boundary conditions are tested.
• All error-handling paths are tested.
• All independent paths are tested (a minimal test sketch follows).
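A minimal unittest sketch, assuming a hypothetical percentage function as the unit under test: it exercises a typical input, a boundary condition, and an error-handling path.

    import unittest

    def percentage(part: float, whole: float) -> float:
        """The smallest unit under test: converts part/whole to a percentage."""
        if whole == 0:
            raise ValueError("whole must be non-zero")
        return 100.0 * part / whole

    class TestPercentage(unittest.TestCase):
        def test_typical_input(self):
            self.assertAlmostEqual(percentage(1, 4), 25.0)

        def test_boundary_condition(self):
            # Boundary: part equal to whole.
            self.assertAlmostEqual(percentage(5, 5), 100.0)

        def test_error_handling_path(self):
            # Error-handling path: division by zero is rejected.
            with self.assertRaises(ValueError):
                percentage(1, 0)

    if __name__ == "__main__":
        unittest.main()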
Stub vs. Driver
• A stub is considered a subprogram. A driver is a simple main program.
• A stub does not accept test-case data. A driver accepts test-case data.
• A stub replaces a module that is subordinate to (called by) the component being tested. A driver passes the test-case data to the tested component and prints the returned results (a sketch follows).
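A minimal Python sketch of a driver and a stub; the order-total component and tax module are hypothetical.

    def component_under_test(order_total: float) -> float:
        # The real tax module is not ready yet, so a stub stands in for it.
        return order_total + tax_stub(order_total)

    def tax_stub(order_total: float) -> float:
        """Stub: a subprogram replacing a module subordinate to the
        component under test; it returns a canned value and accepts
        no test-case data of its own."""
        return 0.0   # canned answer standing in for the real tax module

    def driver():
        """Driver: a simple main program that accepts test-case data,
        passes it to the component, and prints the returned result."""
        for total in [0.0, 10.0, 99.99]:      # test-case data
            print(f"input={total} -> output={component_under_test(total)}")

    if __name__ == "__main__":
        driver()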
Black Box Testing
This method is named so because the software program, in the eyes of the tester, is like a black box: one cannot see inside it. This method attempts to find errors in the following categories: incorrect or missing functions; interface errors; errors in data structures or external database access; behaviour or performance errors; and initialization and termination errors.
Definition by ISTQB
• Black box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.
• Black box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or system
without reference to its internal structure.
Example
A tester, without knowledge of the internal structures of a website, tests the web pages by using a
browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.
Levels Applicable To
Black Box testing method is applicable to the following levels of software testing:
• Integration Testing
• System Testing
• Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more black-box testing
method comes into use.
Techniques
Following are some techniques that can be used for designing black box tests.
• Equivalence Partitioning: It is a software test design technique that involves dividing input
values into valid and invalid partitions and selecting representative values from each partition
as test data.
• Boundary Value Analysis: It is a software test design technique that involves the determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data. (A sketch of this and the previous technique follows the list.)
• Cause-Effect Graphing: It is a software test design technique that involves identifying the
cases (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and
generating test cases accordingly.
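A minimal unittest sketch of equivalence partitioning and boundary value analysis, assuming a hypothetical grade function whose valid scores are 0 to 100 with a pass mark of 40.

    import unittest

    def grade(score: int) -> str:
        """Hypothetical function under test: valid scores are 0-100."""
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 40 else "fail"

    class TestGradeBlackBox(unittest.TestCase):
        def test_equivalence_partitions(self):
            # One representative value from each partition.
            self.assertEqual(grade(20), "fail")    # valid partition: 0-39
            self.assertEqual(grade(70), "pass")    # valid partition: 40-100
            with self.assertRaises(ValueError):
                grade(-5)                          # invalid partition: < 0
            with self.assertRaises(ValueError):
                grade(150)                         # invalid partition: > 100

        def test_boundary_values(self):
            # Values at and just beside each boundary.
            self.assertEqual(grade(0), "fail")
            self.assertEqual(grade(39), "fail")
            self.assertEqual(grade(40), "pass")
            self.assertEqual(grade(100), "pass")
            with self.assertRaises(ValueError):
                grade(-1)
            with self.assertRaises(ValueError):
                grade(101)

    if __name__ == "__main__":
        unittest.main()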
Advantages
• Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
• Tester need not know programming languages or how the software has been implemented.
• Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
• Test cases can be designed as soon as the specifications are complete.
Disadvantages
• Only a small number of possible inputs can be tested and many program paths will be left
untested.
• Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
• Tests can be redundant if the software designer/developer has already run a test case.
• Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case
in Black Box Testing.
White Box Testing
Definition by ISTQB
• White-box testing: Testing based on an analysis of the internal structure of the component or system.
• White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Example
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage,
determines all legal (valid and invalid) AND illegal inputs and verifies the outputs against the
expected outcomes, which is also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not
moving.
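A minimal unittest sketch of the white-box idea, assuming a hypothetical classify function: the tester reads the implementation and writes one test per internal branch.

    import unittest

    def classify(x: int) -> str:
        """Hypothetical function whose internal branches guide the tests."""
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        else:
            return "positive"

    class TestClassifyWhiteBox(unittest.TestCase):
        # One test per internal path, chosen by reading the code.
        def test_negative_branch(self):
            self.assertEqual(classify(-3), "negative")

        def test_zero_branch(self):
            self.assertEqual(classify(0), "zero")

        def test_positive_branch(self):
            self.assertEqual(classify(7), "positive")

    if __name__ == "__main__":
        unittest.main()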
Levels Applicable To
White Box Testing method is applicable to the following levels of software testing:
• Unit Testing
• Integration Testing
• System Testing
However, it is mainly applied to Unit Testing.
Advantages
• Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
Disadvantages
• Since tests can be very complex, highly skilled resources are required, with a thorough
knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools to cater to
every kind of implementation/platform may not be readily available.
Test Documentation:
Test Case
A TEST CASE is a set of conditions or variables under which a tester will determine whether a
system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or design of an
application.
Test Case Template
A test case can have the following elements. Note, however, that a test management tool is normally
used by companies and the format is determined by the tool used.
Test Suite ID: The ID of the test suite to which this test case belongs.
Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
Test Data: The test data, or links to the test data, that are to be used while conducting the test.
Actual Result: The actual result of the test; to be filled in after executing the test.
Status: Pass or Fail. Other statuses can be 'Not Executed' if testing is not performed and 'Blocked' if testing is blocked.
Remarks: Any comments on the test case or test execution.
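A minimal Python sketch of how a tool might store this template; the field names follow the template above, and the record values are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestCaseRecord:
        """One way a test-management tool might store the template's fields."""
        test_suite_id: str
        prerequisites: List[str] = field(default_factory=list)
        test_data: List[str] = field(default_factory=list)
        actual_result: Optional[str] = None   # filled in after execution
        status: str = "Not Executed"          # Pass / Fail / Not Executed / Blocked
        remarks: str = ""

    tc = TestCaseRecord(test_suite_id="TS-LOGIN-01",
                        prerequisites=["User account exists"],
                        test_data=["username=alice", "password=secret"])
    tc.actual_result = "Login succeeded"
    tc.status = "Pass"
    print(tc)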
Test Plan
A TEST PLAN is a document describing software testing scope and activities. It is the basis for
formally testing any software/product in a project.
• Test plan: A document describing the scope, approach, resources and schedule of intended test
activities. It identifies amongst others test items, the features to be tested, the testing tasks, who
will do each task, degree of tester independence, the test environment, the test design
techniques and entry and exit criteria to be used, and the rationale for their choice, and any
risks requiring contingency planning. It is a record of the test planning process.
• Master test plan: A test plan that typically addresses multiple test levels.
• Phase test plan: A test plan that typically addresses one test phase.
Test Plan Types
One can have the following types of test plans:
• Master Test Plan: A single high-level test plan for a project/product that unifies all other test
plans.
• Testing Level Specific Test Plans: Plans for each level of testing.
o Unit Test Plan
o Integration Test Plan
o System Test Plan
o Acceptance Test Plan
• Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan
and Security Test Plan.
Test Plan Identifier:
• Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)
Introduction:
References:
• List the related documents, with links to them if available, including the following:
o Project Plan
o Configuration Management Plan
Test Items:
Features to be tested:
Approach:
Item Pass/Fail Criteria:
• Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
Test Deliverables:
• List test deliverables, and links to them if available, including the following:
o Test Plan (this document itself)
o Test Cases
o Test Scripts
o Defect/Enhancement Logs
o Test Reports
Test Environment:
Estimate:
• Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed
estimation.
Schedule:
• Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the
detailed schedule.
Responsibilities:
Risks:
Assumptions and Dependencies:
• List the assumptions that have been made during the preparation of this plan.
• List the dependencies.
Approvals:
• Specify the names and roles of all persons who must approve the plan.
• Provide space for signatures and dates. (If the document is to be printed.)
• Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a section that has been mentioned in the template above, go ahead and delete that section in your test plan.
• Be specific. For example, when you specify an operating system as a property of a test
environment, mention the OS Edition/Version as well, not just the OS Name.
• Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
• Have the test plan reviewed a number of times prior to baselining it or sending it for approval.
The quality of your test plan speaks volumes about the quality of the testing you or your team
is going to perform.
• Update the plan as and when necessary. An outdated and unused document stinks and is worse than not having the document in the first place.
Defect
A Software DEFECT / BUG is a condition in a software product which does not meet a software
requirement (as stated in the requirement specifications) or end-user expectation (which may not be
specified but is reasonable). In other words, a defect is an error in coding or logic that causes a
program to malfunction or to produce incorrect/unexpected results.
Classification
Software Defects/Bugs are normally classified as per:
Module/Component
Module/Component indicates the specific module or component of the product where the defect was detected.
• Module/Component A
• Module/Component B
• Module/Component C
• …
Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the defect was
identified.
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug was
introduced. Phase Injected is always earlier in the software development lifecycle than the Phase
Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.
• Requirements Development
• High Level Design
• Detailed Design
• Coding
• Build/Deployment
Defect Report
DEFECT REPORT is a document that identifies and describes a defect detected by a tester. The
purpose of a defect report is to state the problem as clearly as possible so that developers can replicate
the defect easily and fix it.
Module: The specific module of the product where the defect was detected.
Detected Build Version: The build version of the product where the defect was detected (e.g. 1.2.3.5).
Steps to Replicate: A step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result: The actual result received when the steps were followed.
Assigned To: The name of the person assigned to analyze/fix the defect.
Fixed Build Version: The build version of the product where the defect was fixed (e.g. 1.2.3.9).
• Be specific:
o Specify the exact action: Do not say something like ‘Select ButtonB’. Do you mean
‘Click ButtonB’ or ‘Press ALT+B’ or ‘Focus on ButtonB and click ENTER’? Of
course, if the defect can be arrived at by using all the three ways, it’s okay to use a
generic term as ‘Select’ but bear in mind that you might just get the fix for the ‘Click
ButtonB’ scenario. [Note: This might be a highly unlikely example but it is hoped that
the message is clear.]
o In case of multiple paths, mention the exact path you followed: Do not say something
like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.” Understanding all the
paths at once will be difficult. Instead, say “Do ‘A and X’ and you get D.” You can, of
course, mention elsewhere in the report that “D can also be got if you do ‘B and Y’ or
‘C and Z’.”
o Do not use vague pronouns: Do not say something like "In ApplicationA, open X, Y, and Z, and then close it." What does the 'it' stand for: 'Z', 'Y', 'X', or 'ApplicationA'?
• Be detailed:
o Provide more information (not less). In other words, do not be lazy. Developers may or
may not use all the information you provide but they sure do not want to beg you for
any information you have missed.
• Be objective:
o Do not make subjective statements like “This is a lousy application” or “You fixed it
real bad.”
o Stick to the facts and avoid the emotions.
• Reproduce the defect:
o Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at least once more to be sure. (If you cannot replicate it again, try recalling the exact test condition and keep trying. However, if you still cannot replicate it after many trials, finally submit the report for further investigation, stating that you are unable to reproduce the defect anymore and providing any evidence of the defect you may have gathered.)
• Review the report:
o Do not hit ‘Submit’ as soon as you write the report. Review it at least once. Remove
any typos.