
UNIT-III

Software Modeling & Design


(Marks-14)
Translating the Requirement Model into the Design Model
Concepts of data modeling

• Analysis modeling starts with data modeling.


• The software engineer defines all the data objects that are processed within the system and identifies the
relationships between those data objects.
Data objects
• A data object is a representation of composite information.
• Composite information means that the object has a number of different properties or attributes.
For example, height is a single value, so it is not a valid data object; but dimensions, which contain
height, width, and depth, can be defined as a data object.
Data Attributes
Each data object has a set of attributes.

The attributes of a data object serve one of three purposes:


• Name an instance of the data object.
• Describe the instance.
• Make reference to an instance in another table.
Relationship
A relationship shows how data objects are connected to one another.

Cardinality
Cardinality states the number of occurrences of one object that are related to the number of occurrences of another object.

Cardinality is expressed as:

One to one (1:1)


One occurrence of an object is related to one occurrence of another object.
For example, one employee has only one ID.

One to many (1:N)


One occurrence of an object is related to many occurrences of another object.
For example, one college has many departments.
Many to many (M:N)
Many occurrences of one object are related to many occurrences of another object.

For example, many customers place orders for many products.

Modality
• If the occurrence of a relationship is optional, then the modality of the relationship is zero.
• If the occurrence of a relationship is mandatory, then the modality of the relationship is one.
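The ideas above can be illustrated with a small sketch. The entity names (College, Department) and attributes below are hypothetical, chosen only to show data objects, attributes, a 1:N relationship, and modality; they are not part of the syllabus example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Department:               # a data object
    dept_id: int                # attribute that names the instance
    name: str                   # attribute that describes the instance
    college_id: int             # attribute that references an instance of another object

@dataclass
class College:                  # a data object
    college_id: int
    name: str
    departments: List[Department] = field(default_factory=list)   # 1:N relationship

# Cardinality 1:N - one college has many departments.
# Modality: every department must belong to a college (modality one),
# while a college may exist with no departments yet (modality zero).
college = College(college_id=1, name="Example College")
college.departments.append(
    Department(dept_id=10, name="Computer Engineering", college_id=college.college_id)
)
print(len(college.departments))   # 1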

Analysis Modeling

• The analysis model operates as a link between the 'system description' and the 'design model'.
• In the analysis model, the information, functions, and behaviour of the system are defined, and these are
translated into architectural, interface, and component-level designs during 'design modeling'.

Elements of the analysis model

1. Scenario-based elements


• These elements represent the system from the user's point of view.
• Scenario-based elements include use case diagrams and user stories.
2. Class-based elements
• The objects of this type of element are manipulated by the system.
• They define the objects, their attributes, and the relationships among them.
• They also show the collaborations that occur between the classes.
• Class-based elements include class diagrams and collaboration diagrams.
3. Behavioral elements
• Behavioral elements represent the states of the system and how they are changed by external events.
• Behavioral elements include sequence diagrams and state diagrams.
4. Flow-oriented elements
• As information flows through a computer-based system, it gets transformed.
• These elements show how data objects are transformed as they flow between the various system functions.
• Flow-oriented elements include data flow diagrams and control flow diagrams.
Analysis Rules of Thumb

The following rules of thumb should be followed while creating the analysis model:


• The model should focus on requirements in the business or problem domain. The level of abstraction should be
kept reasonably high, i.e. unnecessary detail should be avoided.
• Every element of the model should help in understanding the software requirements and should focus on the
information, function, and behaviour of the system.
• Consideration of infrastructure and other non-functional models should be delayed until design.
For example, a database may be required for a system, but the classes, functions, and behaviour of the
database are not initially required. If these are considered too early, the design is delayed.
• Minimize coupling throughout the system. The interconnections between modules are
known as 'coupling'.
• The analysis model should provide value to all the stakeholders related to the model.
• The model should be kept as simple as possible, because a simple model always helps in easy understanding
of the requirements.
Design Modeling
Fundamental Design Concepts

The fundamental software design concepts are as follows:

1. Abstraction

• At the highest level of abstraction, a solution is stated in broad terms using the language of the problem
environment.
• Lower levels of abstraction provide a more detailed description of the solution.
• A sequence of instructions that has a specific and limited function is referred to as a procedural
abstraction.
• A collection of data that describes a data object is a data abstraction.
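As a brief, hedged sketch of the two kinds of abstraction (the classic "door" example; the names Door and open_door below are illustrative only):

from dataclasses import dataclass

# Data abstraction: a named collection of data that describes the data object "door".
@dataclass
class Door:
    door_type: str
    swing_direction: str
    is_locked: bool = True
    is_open: bool = False

# Procedural abstraction: the name "open_door" implies what is done,
# without the caller needing to know the detailed steps inside.
def open_door(door: Door) -> None:
    if door.is_locked:
        door.is_locked = False   # walk to the door, turn the key, unlock...
    door.is_open = True          # ...then push it open

d = Door(door_type="wooden", swing_direction="inward")
open_door(d)
print(d.is_open)   # True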
2. Architecture
• The complete structure of the software is known as the software architecture.
• Structure provides conceptual integrity for a system in a number of ways.
• The architecture is the structure of the program modules and the specialized ways in which they interact
with each other.
• It also includes the structure of the data used by the components.
• The aim of software design is to obtain an architectural framework for the system.
• The more detailed design activities are conducted from that framework.
3. Patterns
A design pattern describes a design structure that solves a particular design problem within a
specified context.

4. Modularity
• Software is divided into separately named and addressable components, sometimes called modules,
which are integrated to satisfy the problem requirements.
• Modularity is the single attribute of software that allows a program to be managed easily.
5. Information hiding
Modules should be specified and designed so that information (such as algorithms and data) contained within a
module is inaccessible to other modules that have no need for that information.
6. Functional independence
• Functional independence is the concept of separation of concerns; it is related to the concepts of modularity,
abstraction, and information hiding.
• Functional independence is assessed using two criteria: cohesion and coupling.
Cohesion
• Cohesion is an extension of the information hiding concept.
• A cohesive module performs a single task and requires little interaction with components in other
parts of the program.
Coupling
Coupling is an indication of the interconnection among modules in a software structure.
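A small, hedged sketch of the difference (all names below are invented for illustration): the first function is cohesive and loosely coupled, while the second mixes unrelated tasks and depends on shared internal state.

# High cohesion, low coupling: one task, data passed in explicitly.
def calculate_invoice_total(line_item_prices):
    return sum(line_item_prices)

# Low cohesion, high coupling: mixes unrelated tasks and depends on the
# internal (global) state of another part of the program.
GLOBAL_TAX_RATE = 0.18            # shared detail other modules "know about"

def process_everything(order):
    total = sum(order["prices"])
    total *= 1 + GLOBAL_TAX_RATE                         # hidden dependency on shared state
    print("Sending email to", order["customer_email"])   # unrelated responsibility
    return total

print(calculate_invoice_total([100.0, 25.5]))   # 125.5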

7. Refinement
• Refinement is a top-down design approach.
• It is a process of elaboration.
• A program is developed by successively refining levels of procedural detail.
• A hierarchy is established by decomposing a statement of function in a stepwise manner until
programming language statements are reached.
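A brief sketch of stepwise refinement on a hypothetical task (a word-frequency report); the decomposition and function names are illustrative only:

# Level 1 (statement of function): "produce a word-frequency report for a document".
# Level 2 (refined):               read the document -> count the words -> print the report.
# Level 3 (programming language statements):

from collections import Counter

def read_document(path):
    with open(path, encoding="utf-8") as f:
        return f.read()

def count_words(text):
    return Counter(text.lower().split())

def print_report(counts):
    for word, n in counts.most_common():
        print(word, n)

def word_frequency_report(path):
    print_report(count_words(read_document(path)))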
8. Refactoring
• Refactoring is a reorganization technique that simplifies the design of a component without changing its
function or behaviour.
• Refactoring is the process of changing a software system in such a way that it does not alter the
external behaviour of the code yet improves its internal structure.
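A minimal before/after sketch (the function names are hypothetical): the external behaviour, returning the larger of two numbers, is unchanged, while the internal structure becomes simpler.

# Before refactoring: duplicated, harder-to-follow logic.
def larger_of_before(a, b):
    if a > b:
        result = a
    else:
        result = b
    return result

# After refactoring: the same external behaviour, simpler internal structure.
def larger_of_after(a, b):
    return a if a > b else b

assert larger_of_before(3, 7) == larger_of_after(3, 7) == 7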
9. Design classes
• The design model of the software is defined as a set of design classes.
• Each class describes an element of the problem domain, focusing on the features of the problem that are
visible to the user.

Design Notations:
Data Flow Diagram:

• A notation developed in conjunction with structured systems analysis / structured design (SSA/SD).
• Used primarily for pipe-and-filter styles of architecture.
• A graph-based diagrammatic notation.
• There are extensions for real-time systems that distinguish control flow from data flow.

DFD Symbols:

External Entity: A producer or consumer of information that resides outside the bounds of the system to be modeled.

Process: A transformation of information (a function) that resides within the bounds of the system to be modeled.

Data Object (flow arrow): A data object in motion; the arrowhead indicates the direction of data flow.

Data Store: A repository of data that is to be stored for use by one or more processes; it may be as simple as a buffer or queue or as sophisticated as a relational database.

Example: DFD
Structure Chart:
A Structure Chart (SC) in software engineering and organizational theory is a chart which shows
the breakdown of a system to its lowest manageable levels. They are used in structured programming
to arrange program modules into a tree. Each module is represented by a box, which contains the
module's name.

• A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition of
a problem. It is a tool to aid in software design. It is particularly helpful on large problems.
• A structure chart illustrates the partitioning of a problem into subproblems and shows the
hierarchical relationships among the parts. A classic "organization chart" for a company is an
example of a structure chart.
• The top of the chart is a box representing the entire problem, the bottom of the chart shows a
number of boxes representing the less complicated subproblems. (Left-right on the chart is
irrelevant.)
• A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It
does NOT show the order in which tasks are performed. It does NOT illustrate an algorithm.
• Each block represents some function in the system, and thus should contain a verb phrase, e.g.
"Print report heading."

A structure chart for an ATM machine


Symbols used in Construction of Structure Charts:
Module:

• A process / subroutine / task
• A unit of execution
• Accepts parameters as inputs
• Produces parameters as outputs
• Parameters may be data or control
• Can be invoked and can invoke other modules
• Label: a verb phrase
• Linked to a module specification

Special Modules:

• Predefined (reused) module: highly useful
• “Macro” module: avoid
• Multi-entry module: avoid

Condition:

• The call of a subordinate module depends on a condition
• No label
• The condition is defined in the module specification
• The module specification is the decisive element
Jump:

• A call of a subordinate module
• A connector element
• NOT a data flow
• One specific form of control flow
• Has a direction
• No split or join
• No label

Loop:
• The call of subordinate modules runs in a loop
• No label or condition
• The loop (and its condition) is defined in the module specification
• The module specification is the decisive element

Data Flow:

• Flow of information
• Data transfer
• Bound to an invocation
• Has a direction
• No splits or joins
• Label: a noun
• Specified in the data dictionary

Control Flow:

• Flow of control (distinct from an invocation) that controls the execution path of the targeted module
• Bound to an invocation
• Has a direction
• No splits or joins
• Label: a flag, decision, or condition
• Specified in the data dictionary
Decision Table:
A decision table is a good way to deal with combinations of things (e.g. inputs).

This technique is sometimes also referred to as a ’cause-effect’ table.

• Decision tables provide a systematic way of stating complex business rules, which is useful for
developers as well as for testers.
• Decision tables can be used in test design whether or not they are used in specifications, as
they help testers explore the effects of combinations of different inputs and other software
states that must correctly implement business rules.
• Helping the developers to do a better job can also lead to better relationships with them. Testing
combinations can be a challenge, as the number of combinations can often be huge.

Credit card example:


Let us take another example. Suppose you want to open a credit card account; there are three
conditions: first, if you are a new customer, you get a 15% discount on all your purchases today;
second, if you are an existing customer and you hold a loyalty card, you get a 10% discount; and
third, if you have a coupon, you can get 20% off today (but it cannot be used together with the
'new customer' discount).

• Discount amounts are added, if applicable. This is shown in Table 4.8.


• TABLE 4.8 Decision table for credit card example
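Table 4.8 itself is not reproduced here, but the stated rules can be sketched in code. The function below is only an illustration of how the rule combinations behave; it assumes, where the text is ambiguous, that when both the coupon and the new-customer discount apply, only the coupon discount is taken.

def total_discount(new_customer: bool, loyalty_card: bool, coupon: bool) -> int:
    """Total percentage discount for one combination of conditions (one table column)."""
    discount = 0
    if coupon:
        discount += 20            # coupon: 20% off today
    elif new_customer:
        discount += 15            # new customer: 15%, not combinable with the coupon
    if loyalty_card:
        discount += 10            # loyalty card: 10%, added if applicable
    return discount

# Enumerate every combination of conditions, as the columns of a decision table would.
for nc in (True, False):
    for lc in (True, False):
        for cp in (True, False):
            print(f"new={nc} loyalty={lc} coupon={cp} -> {total_discount(nc, lc, cp)}%")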
Introduction to Testing:

• Testing is a set of activities that are decided in advance, i.e. before the start of development, and
organized systematically.
• The software engineering literature defines various testing strategies for carrying out testing.
• All of these strategies provide a template for testing.

The following characteristics are common to these testing templates:


• The developer should conduct successful technical reviews so that testing can be performed successfully.
• Testing begins at the component level and works outward toward the integration of the entire
computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and by an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing
strategy.

Difference between Verification and Validation

Verification:
• Verification is the process of checking whether the software meets the specified requirements for a particular phase.
• It evaluates an intermediate product.
• The objective of verification is to check whether the software is constructed according to the requirement and design specifications.
• It checks whether the outputs are as per the inputs or not.
• Verification is done before validation.
• Plans, requirements, specifications, and code are evaluated during verification.
• It is largely a manual check of files and documents.

Validation:
• Validation is the process of checking whether the software meets the requirements and expectations of the customer.
• It evaluates the final product.
• The objective of validation is to check whether the specifications are correct and satisfy the business need.
• It checks whether the product is accepted by the user or not.
• Validation is done after verification.
• The actual product or software is tested during validation.
• It is a check performed by executing the developed computer software or program.
Strategy of testing

A strategy for software testing may be viewed in the context of a spiral.

Following figure shows the testing strategy:

Unit testing
Unit testing begins at the centre of the spiral; it concentrates on each unit of the software as implemented in source code.

Integration testing
Integration testing focuses on the design and construction of the software architecture.

Validation testing
Validation testing checks that all requirements, i.e. functional, behavioural, and performance requirements,
are validated against the constructed software.

System testing
System testing confirms that the software and all other system elements are tested as a whole.
These steps are shown in the following figure:

Software Testing Levels


SOFTWARE TESTING LEVELS are the different stages of the software development lifecycle
where testing is conducted. There are four levels of software testing: Unit >> Integration >> System
>> Acceptance.
Levels

Unit Testing: A level of the software testing process where individual units of software are tested. The purpose is to validate that each unit of the software performs as designed.

Integration Testing: A level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.

System Testing: A level of the software testing process where a complete, integrated system is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

Acceptance Testing: A level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.

Unit Testing

• Unit testing focuses on the smallest unit of software design, i.e. the module or software component.
• The module interface is tested to assess whether information properly flows into and out of the unit.
• The local data structure is examined to verify its integrity during execution.
• Boundary conditions are tested.
• All error-handling paths are tested.
• All independent paths are tested.
• Following figure shows the unit testing:

Unit test environment

The unit test environment is as shown in following figure:


Difference between stub and driver

Stub: A stub is a dummy subprogram that stands in for a module which is subordinate to (called by) the component being tested. A stub does not accept test case data.

Driver: A driver is a simple main program that accepts test case data, passes that data to the component being tested, and prints the returned results.
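A hedged sketch of the idea: a component under test, a stub that replaces a subordinate module it depends on, and a driver that feeds it test data. All the names below are hypothetical.

# Component under test: it depends on a subordinate module that supplies a tax rate.
def compute_price_with_tax(amount, get_tax_rate):
    return round(amount * (1 + get_tax_rate()), 2)

# Stub: a dummy subprogram standing in for the real subordinate module.
def tax_rate_stub():
    return 0.10          # fixed, predictable value instead of a real lookup

# Driver: a simple "main program" that passes test case data to the component
# under test and prints the returned results.
if __name__ == "__main__":
    for amount, expected in [(100.0, 110.0), (0.0, 0.0)]:
        actual = compute_price_with_tax(amount, tax_rate_stub)
        print(amount, "->", actual, "PASS" if actual == expected else "FAIL")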

Black Box Testing


BLACK BOX TESTING, also known as Behavioral Testing, is a software testing method in which the
internal structure/design/implementation of the item being tested is not known to the tester. These tests
can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a black box;
inside which one cannot see. This method attempts to find errors in the following categories:

• Incorrect or missing functions


• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors

Definition by ISTQB

• Black box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.
• Black box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or system
without reference to its internal structure.
Example
A tester, without knowledge of the internal structures of a website, tests the web pages by using a
browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.

Levels Applicable To
Black Box testing method is applicable to the following levels of software testing:

• Integration Testing
• System Testing
• Acceptance Testing

The higher the level, and hence the bigger and more complex the box, the more the black-box testing
method comes into use.

Techniques
Following are some techniques that can be used for designing black box tests.

• Equivalence Partitioning: It is a software test design technique that involves dividing input
values into valid and invalid partitions and selecting representative values from each partition
as test data.
• Boundary Value Analysis: It is a software test design technique that involves the determination
of boundaries for input values and selecting values that are at the boundaries and just inside/
outside of the boundaries as test data.
• Cause-Effect Graphing: It is a software test design technique that involves identifying the
cases (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and
generating test cases accordingly.
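As a hedged illustration of the first two techniques, suppose a hypothetical input field accepts ages from 18 to 60 inclusive (this requirement is invented for the example). Equivalence partitioning picks one representative value from each partition; boundary value analysis picks values at and just inside/outside the boundaries.

# Hypothetical requirement: valid ages are 18 to 60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {"below the valid range (invalid)": 10,
              "inside the valid range (valid)": 35,
              "above the valid range (invalid)": 75}
for name, value in partitions.items():
    print(name, value, "->", is_valid_age(value))

# Boundary value analysis: values at and just inside/outside each boundary.
for value in (17, 18, 19, 59, 60, 61):
    print("boundary", value, "->", is_valid_age(value))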

Advantages

• Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
• Tester need not know programming languages or how the software has been implemented.
• Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
• Test cases can be designed as soon as the specifications are complete.

Disadvantages

• Only a small number of possible inputs can be tested and many program paths will be left
untested.
• Without clear specifications, which is the situation in many projects, test cases will be
difficult to design.
• Tests can be redundant if the software designer/developer has already run a test case.
• Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case
in Black Box Testing.

White Box Testing:


WHITE BOX TESTING (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in
which the internal structure/design/implementation of the item being tested is known to the tester. The
tester chooses inputs to exercise paths through the code and determines the appropriate outputs.
Programming know-how and implementation knowledge are essential. White box testing is testing
beyond the user interface and into the nitty-gritty of a system.
This method is named so because the software program, in the eyes of the tester, is like a
white/transparent box; inside which one clearly sees.
Definition by ISTQB (International Software Testing Qualifications Board)

• White-box testing: Testing based on an analysis of the internal structure of the component or
system.
• White-box test design technique: Procedure to derive and/or select test cases based on an
analysis of the internal structure of a component or system.

Example
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage,
determines all legal (valid and invalid) AND illegal inputs and verifies the outputs against the
expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not
moving.

Levels Applicable To
White Box Testing method is applicable to the following levels of software testing:

• Unit Testing: For testing paths within a unit.


• Integration Testing: For testing paths between units.
• System Testing: For testing paths between subsystems.

However, it is mainly applied to Unit Testing.
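A hedged sketch of white-box testing on a hypothetical unit: the tester reads the code, identifies its independent paths (the two branches of the if), and chooses inputs that exercise each path, including the boundary revealed by the condition.

# Hypothetical unit under test with two independent paths.
def classify_grade(score: int) -> str:
    if score >= 40:            # path 1: the "pass" branch
        return "pass"
    return "fail"              # path 2: the "fail" branch

# White-box test cases chosen by reading the code: one per path,
# plus the boundary value 40 visible in the condition.
tests = [(40, "pass"), (85, "pass"), (39, "fail")]
for score, expected in tests:
    actual = classify_grade(score)
    print(score, "->", actual, "PASS" if actual == expected else "FAIL")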

Advantages

• Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
Disadvantages

• Since tests can be very complex, highly skilled resources are required, with a thorough
knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools to cater to
every kind of implementation/platform may not be readily available.

Difference between white and black box testing

• White-box testing is also known as glass-box testing; black-box testing is also called behavioral testing.
• White-box testing starts early in the testing process; black-box testing is applied in the final stages of testing.
• White-box testing requires knowledge of the implementation; black-box testing does not.
• White-box testing is mainly done by the developer; black-box testing is done by the testers.
• In white-box testing the tester must be technically sound; in black-box testing the testers may or may not be technically sound.
• White-box testing methods include basis path testing and control structure testing; black-box testing methods include graph-based testing, equivalence partitioning, boundary value analysis, and orthogonal array testing.

Test Documentation:

Test Case
A TEST CASE is a set of conditions or variables under which a tester will determine whether a
system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or design of an
application.
Test Case Template
A test case can have the following elements. Note, however, that a test management tool is normally
used by companies and the format is determined by the tool used.

Test Suite ID: The ID of the test suite to which this test case belongs.

Test Case ID: The ID of the test case.

Test Case Summary: The summary / objective of the test case.

Related Requirement: The ID of the requirement this test case relates/traces to.

Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.

Test Procedure: Step-by-step procedure to execute the test.

Test Data: The test data, or links to the test data, that are to be used while conducting the test.

Expected Result: The expected result of the test.

Actual Result: The actual result of the test; to be filled in after executing the test.

Status: Pass or Fail. Other statuses can be ‘Not Executed’ if testing is not performed and ‘Blocked’ if testing is blocked.

Remarks: Any comments on the test case or test execution.

Created By: The name of the author of the test case.

Date of Creation: The date of creation of the test case.

Executed By: The name of the person who executed the test.

Date of Execution: The date of execution of the test.

Test Environment: The environment (Hardware/Software/Network) in which the test was executed.
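Purely as a hedged illustration, one test case filled into this template might look like the sketch below; every ID and value here is invented.

# A hypothetical filled-in test case (all values invented for illustration).
test_case = {
    "Test Suite ID": "TS-LOGIN",
    "Test Case ID": "TC-LOGIN-001",
    "Test Case Summary": "Verify login with a valid username and password.",
    "Related Requirement": "REQ-AUTH-01",
    "Prerequisites": "A registered user account exists.",
    "Test Procedure": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "Test Data": {"username": "testuser", "password": "Valid@123"},
    "Expected Result": "The user is taken to the home page.",
    "Actual Result": None,          # filled in after execution
    "Status": "Not Executed",
    "Created By": "Tester A",
    "Date of Creation": "2024-01-10",
}
print(test_case["Test Case ID"], "-", test_case["Status"])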

Test Plan
A TEST PLAN is a document describing software testing scope and activities. It is the basis for
formally testing any software/product in a project.

• Test plan: A document describing the scope, approach, resources and schedule of intended test
activities. It identifies amongst others test items, the features to be tested, the testing tasks, who
will do each task, degree of tester independence, the test environment, the test design
techniques and entry and exit criteria to be used, and the rationale for their choice, and any
risks requiring contingency planning. It is a record of the test planning process.
• Master test plan: A test plan that typically addresses multiple test levels.
• Phase test plan: A test plan that typically addresses one test phase.
Test Plan Types
One can have the following types of test plans:

• Master Test Plan: A single high-level test plan for a project/product that unifies all other test
plans.
• Testing Level Specific Test Plans: Plans for each level of testing.
o Unit Test Plan
o Integration Test Plan
o System Test Plan
o Acceptance Test Plan
• Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan
and Security Test Plan.

Test Plan Template


The format and content of a software test plan vary depending on the processes, standards, and test
management tools being implemented. Nevertheless, the following format, which is based on IEEE
standard for software test documentation, provides a summary of what a test plan can/should contain.
Test Plan Identifier:

• Provide a unique identifier for the document. (Adhere to the Configuration Management
System if you have one.)

Introduction:

• Provide an overview of the test plan.


• Specify the goals/objectives.
• Specify any constraints.

References:

• List the related documents, with links to them if available, including the following:
o Project Plan
o Configuration Management Plan

Test Items:

• List the test items (software/products) and their versions.

Features to be tested:

• List the features of the software/product to be tested.


• Provide references to the Requirements and/or Design specifications of the features to be tested

Features Not to Be Tested:

• List the features of the software/product which will not be tested.


• Specify the reasons these features won’t be tested.

Approach:

• Mention the overall approach to testing.


• Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing methods
[Manual/Automated; White Box/Black Box/Gray Box]
Item Pass/Fail Criteria:

• Specify the criteria that will be used to determine whether each test item (software/product) has
passed or failed testing.

Suspension Criteria and Resumption Requirements:

• Specify criteria to be used to suspend the testing activity.


• Specify testing activities which must be redone when testing is resumed.

Test Deliverables:

• List test deliverables, and links to them if available, including the following:
o Test Plan (this document itself)
o Test Cases
o Test Scripts
o Defect/Enhancement Logs
o Test Reports

Test Environment:

• Specify the properties of test environment: hardware, software, network etc.


• List any testing or related tools.

Estimate:

• Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed
estimation.

Schedule:

• Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the
detailed schedule.

Staffing and Training Needs:

• Specify staffing needs by role and required skills.


• Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities:

• List the responsibilities of each team/role/individual.

Risks:

• List the risks that have been identified.


• Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:

• List the assumptions that have been made during the preparation of this plan.
• List the dependencies.

Approvals:

• Specify the names and roles of all persons who must approve the plan.
• Provide space for signatures and dates. (If the document is to be printed.)

Test Plan Guidelines

• Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a
section that has been mentioned in the template above, go ahead and delete that section in your
test plan.
• Be specific. For example, when you specify an operating system as a property of a test
environment, mention the OS Edition/Version as well, not just the OS Name.
• Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
• Have the test plan reviewed a number of times prior to baselining it or sending it for approval.
The quality of your test plan speaks volumes about the quality of the testing you or your team
is going to perform.
• Update the plan as and when necessary. An out-dated and unused document stinks and is worse
than not having the document in the first place.

Defect
A Software DEFECT / BUG is a condition in a software product which does not meet a software
requirement (as stated in the requirement specifications) or end-user expectation (which may not be
specified but is reasonable). In other words, a defect is an error in coding or logic that causes a
program to malfunction or to produce incorrect/unexpected results.

• A program that contains a large number of bugs is said to be buggy.


• Reports detailing bugs in software are known as bug reports. (See Defect Report)
• Applications for tracking bugs are known as bug tracking tools.
• The process of finding the cause of bugs is known as debugging.
• The process of intentionally injecting bugs in a software program, to estimate test coverage by
monitoring the detection of those bugs, is known as bebugging.
Software Testing proves that defects exist but NOT that defects do not exist.

Classification
Software Defects/ Bugs are normally classified as per:

• Severity / Impact (See Defect Severity)


• Probability / Visibility (See Defect Probability)
• Priority / Urgency (See Defect Priority)
• Related Dimension of Quality (See Dimensions of Quality)
• Related Module / Component
• Phase Detected
• Phase Injected

Related Module /Component


Related Module / Component indicates the module or component of the software where the defect was
detected. This provides information on which module / component is buggy or risky.

• Module/Component A
• Module/Component B
• Module/Component C
• …

Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the defect was
identified.

• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug was
introduced. Phase Injected is always earlier in the software development lifecycle than the Phase
Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.

• Requirements Development
• High Level Design
• Detailed Design
• Coding
• Build/Deployment

Defect Report
DEFECT REPORT is a document that identifies and describes a defect detected by a tester. The
purpose of a defect report is to state the problem as clearly as possible so that developers can replicate
the defect easily and fix it.

Defect Report Template


In most companies, a defect reporting tool is used and the elements of a report can vary. However, in
general, a defect report can consist of the following elements.

ID: Unique identifier given to the defect. (Usually automated.)

Project: Project name.

Product: Product name.

Release Version: Release version of the product. (e.g. 1.2.3)

Module: Specific module of the product where the defect was detected.

Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)

Summary: Summary of the defect. Keep this clear and concise.

Description: Detailed description of the defect. Describe as much as possible, but without repeating anything or using complex words. Keep it simple but comprehensive.

Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.

Actual Result: The actual result you received when you followed the steps.

Expected Results: The expected results.

Attachments: Attach any additional information like screenshots and logs.

Remarks: Any additional comments on the defect.

Defect Severity: Severity of the defect. (See Defect Severity)

Defect Priority: Priority of the defect. (See Defect Priority)

Reported By: The name of the person who reported the defect.

Assigned To: The name of the person assigned to analyze/fix the defect.

Status: The status of the defect. (See Defect Life Cycle)

Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)

Test Summary Report


It is essential that you report defects effectively so that time and effort are not unnecessarily wasted in
trying to understand and reproduce the defect. Here are some guidelines:

• Be specific:
o Specify the exact action: Do not say something like ‘Select ButtonB’. Do you mean
‘Click ButtonB’ or ‘Press ALT+B’ or ‘Focus on ButtonB and click ENTER’? Of
course, if the defect can be arrived at by using all the three ways, it’s okay to use a
generic term as ‘Select’ but bear in mind that you might just get the fix for the ‘Click
ButtonB’ scenario. [Note: This might be a highly unlikely example but it is hoped that
the message is clear.]
o In case of multiple paths, mention the exact path you followed: Do not say something
like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.” Understanding all the
paths at once will be difficult. Instead, say “Do ‘A and X’ and you get D.” You can, of
course, mention elsewhere in the report that “D can also be got if you do ‘B and Y’ or
‘C and Z’.”
o Do not use vague pronouns: Do not say something like “In ApplicationA, open X, Y,
and Z, and then close it.” What does the ‘it’ stand for: ‘Z’, ‘Y’, ‘X’, or
‘ApplicationA’?
• Be detailed:
o Provide more information (not less). In other words, do not be lazy. Developers may or
may not use all the information you provide but they sure do not want to beg you for
any information you have missed.
• Be objective:
o Do not make subjective statements like “This is a lousy application” or “You fixed it
real bad.”
o Stick to the facts and avoid the emotions.
• Reproduce the defect:
o Do not be impatient and file a defect report as soon as you uncover a defect. Replicate
it at least once more to be sure. (If you cannot replicate it again, try recalling the exact
test condition and keep trying. However, if you still cannot replicate it after many
trials, finally submit the report for further investigation, stating that you are unable to
reproduce the defect anymore and providing any evidence of the defect you have
gathered.)
• Review the report:
o Do not hit ‘Submit’ as soon as you write the report. Review it at least once. Remove
any typos.
