
Unit-5 –Software Design

(1) Design Concepts and Principles


 Software design sits at the technical core of software engineering and is applied regardless of the
software process model that is used.
 The design task produces a data design, an architectural design, an interface design, and a component
design.
Data Design
 The data design transforms the information domain model created during analysis into the data
structures that will be required to implement the software.
 The data objects and relationships defined in the entity-relationship diagram, together with the detailed data content in the data dictionary, provide the basis for the data design activity.
 Part of data design may occur in combination with the design of software architecture.
Architectural Design
 The architectural design defines the relationship between major structural elements of the software.
 The architectural design representation—the framework of a computer-based system—can be derived
from the system specification, the analysis model, and the interaction of subsystems defined within the
analysis model.
Interface Design
 The interface design describes how the software communicates within itself, with systems that
interoperate with it, and with humans who use it.
 An interface implies a flow of information (e.g., data and/or control) and a specific type of behavior.
Therefore, data and control flow diagrams provide much of the information required for interface design.
Component-level Design
 The component-level design transforms structural elements of the software architecture into a
procedural description of software components.

Design principles
1. The design process should not suffer from “tunnel vision.”
2. The design should be traceable to the analysis model.
3. The design should not reinvent the wheel.
4. The design should “minimize the intellectual distance” between the software and the problem as it exists
in the real world.
5. The design should exhibit uniformity and integration.
6. The design should be structured to accommodate change.
7. The design should be structured to degrade gently, even when abnormal data, events, or operating
conditions are encountered.
8. Design is not coding, coding is not design.
9. The design should be assessed for quality as it is being created, not after the fact.
10. The design should be reviewed to minimize conceptual (semantic) errors.

(2) Software Architecture and Software Design


 Architectural design represents the structure of data and program components that are required to build
a computer-based system.
 It considers the architectural style that the system will take, the structure and properties of the components that constitute the system, and the interrelationships that occur among all architectural components of a system.
 Representations of software architecture are an enabler for communication between all parties
(stakeholders) interested in the development of a computer-based system.
 The architecture highlights early design decisions that will have a profound impact on all software engineering work that follows and, as important, on the ultimate success of the system as an operational entity.

 Architecture “constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.”

Architectural Styles
The software that is built for computer-based systems also exhibits one of many architectural styles.
Each style describes a system category that encompasses
1 A set of components (e.g., a database, computational modules) that perform a function required by a
system.
2 A set of connectors that enable “communication, coordination, and cooperation” among components.
3 Constraints that define how components can be integrated to form the system.
4 Semantic models that enable a designer to understand the overall properties of a system by
analyzing the known properties of its constituent parts.
Data-centered architecture style
 A data store (e.g., a file or database) resides at the center of this architecture and is accessed frequently
by other components that update, add, delete, or otherwise modify data within the store.
 Client software accesses a central repository.
 In some cases the data repository is passive.
 That is, client software accesses the data independent of any changes to the data or the actions of other
client software.

Figure: Data-centered architecture


Data-flow architectures
 This architecture is applied when input data are to be transformed through a series of computational or
manipulative components into output data.
 A pipe and filter pattern has a set of components, called filters, connected by pipes that transmit data
from one component to the next.
 Each filter works independently of those components upstream and downstream, is designed to expect
data input of a certain form, and produces data output (to the next filter) of a specified form.
 However, the filter does not require knowledge of the working of its neighboring filters.
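As an illustrative sketch only (not from the text), the pipe-and-filter idea can be expressed with Python generators: each filter consumes a stream, transforms it, and yields it onward, knowing nothing about how its neighbors work.

```python
# Pipe-and-filter sketch: each "filter" is a generator that transforms a
# stream of items and knows nothing about its upstream or downstream peers.

def read_source(lines):
    # Source: emits raw records one at a time.
    for line in lines:
        yield line

def strip_blanks(stream):
    # Filter: drops empty records.
    for item in stream:
        if item.strip():
            yield item

def to_upper(stream):
    # Filter: produces output of a specified form for the next filter.
    for item in stream:
        yield item.upper()

# The "pipes" are simply the chained generators.
raw = ["alpha", "", "beta", "  ", "gamma"]
pipeline = to_upper(strip_blanks(read_source(raw)))
print(list(pipeline))  # ['ALPHA', 'BETA', 'GAMMA']
```

Swapping or inserting a filter changes the pipeline without touching the other filters, which is exactly the independence the style promises.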


Figure: Data-flow architectures


Call and return architecture
 This architectural style enables a software designer (system architect) to achieve a program structure
that is relatively easy to modify and scale.
A number of substyles exist within this category:
 Main program/subprogram architectures. This classic program structure decomposes function into a
control hierarchy where a “main” program invokes a number of program components, which in turn may
invoke still other components.
 Remote procedure call architectures. The components of a main program/ subprogram architecture are
distributed across multiple computers on a network.
Object-oriented architecture
 The components of a system encapsulate data and the operations that must be applied to manipulate
the data.
 Communication and coordination between components is accomplished via message passing.
Layered architecture
 A number of different layers are defined, each accomplishing operations that progressively become
closer to the machine instruction set.
 At the outer layer, components service user interface operations.
 At the inner layer, components perform operating system interfacing.
 Intermediate layers provide utility services and application software functions.


(3) Data Design


This section describes data design at both the architectural and component levels. At the architecture level,
data design is the process of creating a model of the information represented at a high level of abstraction
(using the customer's view of data).

Data Design at the Architectural Level


 The challenge is to extract useful information from the data environment, particularly when the information desired is cross-functional.
 To solve this challenge, the business IT community has developed data mining techniques, also called
knowledge discovery in databases (KDD), that navigate through existing databases in an attempt to
extract appropriate business-level information.
 However, the existence of multiple databases, their different structures, the degree of detail contained within the databases, and many other factors make data mining difficult within an existing database environment.
 An alternative solution, called a data warehouse, adds an additional layer to the data architecture.
 A data warehouse is a separate data environment, not directly integrated with day-to-day applications, that encompasses all data used by a business.
Data Design at the Component Level
At the component level, data design focuses on specific data structures required to realize the data objects to
be manipulated by a component.
̶ Refine data objects and develop a set of data abstractions
̶ Implement data object attributes as one or more data structures
̶ Review data structures to ensure that appropriate relationships have been established
Set of principles for data specification:
1. The systematic analysis principles applied to function and behavior should also be applied to data.
2. All data structures and the operations to be performed on each should be identified.
3. A data dictionary should be established and used to define both data and program design.
4. Low-level data design decisions should be deferred until late in the design process.
5. The representation of data structure should be known only to those modules that must make direct use
of the data contained within the structure.
6. A library of useful data structures and the operations that may be applied to them should be developed.
7. A software design and programming language should support the specification and realization of abstract
data types.
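As a small illustration of principles 5 and 7 (a hypothetical sketch, not from the text), the following abstract data type hides its representation from client modules, which see only the operations:

```python
class Stack:
    """Abstract data type: clients use push/pop/is_empty, never the list."""

    def __init__(self):
        self._items = []          # representation, private by convention

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(10)
s.push(20)
print(s.pop())  # 20 -- no caller ever touches s._items directly
```

Because only the Stack module knows the representation, it can later be reimplemented (say, as a linked structure) without changing any client.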

(4) Component-Level Design or Procedural Design


 Component-level design, also called procedural design, occurs after data, architectural, and interface
designs have been established.
 Component-level design defines the data structures, algorithms, interface characteristics, and
communication mechanisms allocated to each software component.
 The intent is to translate the design model into operational software.
 But the level of abstraction of the existing design model is relatively high, and the abstraction level of the
operational program is low.
 A component is “a modular, deployable, and replaceable part of a system that encapsulates implementation and exposes a set of interfaces.”


Function Oriented Approach


The following are the salient features of a typical function-oriented design approach:
1. A system is viewed as something that performs a set of functions. Starting at this high-level view of the system, each function is successively refined into more detailed functions.
For example, consider a function create-new-library-member, which creates the record for a new member, assigns a unique membership number, and prints a bill for the membership charge. This function may consist of the following sub-functions:
̶ assign-membership-number
̶ create-member-record
̶ print-bill
Each of these sub-functions may be split into more detailed sub-functions, and so on; a code sketch of this style appears after the list below.
2. The system state is centralized and shared among different functions; e.g., data such as member-records are available for reference and updating by several functions, such as:
̶ create-new-member
̶ delete-member
̶ update-member-record
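A minimal, hypothetical sketch of this function-oriented style (names invented for illustration): the member records and the membership counter form centralized state shared by all the functions.

```python
# Centralized, shared system state: several functions reference and
# update member_records and next_member_no directly.
member_records = {}
next_member_no = 1000

def assign_membership_number():
    global next_member_no
    next_member_no += 1
    return next_member_no

def create_member_record(number, name):
    member_records[number] = {"name": name}

def print_bill(number, charge):
    print(f"Bill for member {number}: Rs. {charge}")

def create_new_library_member(name, charge):
    # The high-level function refined into the three sub-functions above.
    number = assign_membership_number()
    create_member_record(number, name)
    print_bill(number, charge)
    return number

create_new_library_member("Asha", 500)
```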

Object Oriented Approach


• In the object-oriented design approach, the system is viewed as a collection of objects (i.e., entities). The state is decentralized among the objects, and each object manages its own state information.
• For example, in a Library Automation Software, each library member may be a separate object with its own data and functions to operate on these data. In fact, the functions defined for one object cannot refer to or change the data of other objects.
• Objects have their own internal data which define their state. Similar objects constitute a class; in other words, each object is a member of some class. Classes may inherit features from a superclass. Conceptually, objects communicate by message passing, as the sketch below shows.
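For contrast, a minimal sketch of the same library example in the object-oriented style (again with invented names): each object manages its own state, and interaction happens only through its methods.

```python
class LibraryMember:
    """Each object manages its own state; no global member data."""

    _next_number = 1000               # class-level counter

    def __init__(self, name):
        LibraryMember._next_number += 1
        self._number = LibraryMember._next_number   # state local to this object
        self._name = name

    def print_bill(self, charge):
        print(f"Bill for member {self._number} ({self._name}): Rs. {charge}")

member = LibraryMember("Asha")   # "create-new-member" becomes object creation
member.print_bill(500)           # interaction via message passing (a method call)
```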

Function-Oriented Vs. Object-Oriented Design


• Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc.
• For example, in OOD an employee payroll software is not developed by designing functions such as update-employee-record and get-employee-address, but by designing objects such as employee and department.
• In OOD, state information is not represented in a centralized shared memory but is distributed among
the objects of the system.
• For example, while developing an employee pay-roll system, the employee data such as the names of the
employees, their code numbers, basic salaries, etc. are usually implemented as global data in a
traditional programming system; whereas in an object-oriented system these data are distributed among
different employee objects of the system.
• Objects communicate by passing messages. Therefore, one object may discover the state information of another object by interrogating it. Of course, the real-world functions must still be implemented somewhere.
• Function-oriented techniques such as SA/SD group functions together if, as a group, they constitute a
higher-level function. On the other hand, object-oriented techniques group functions together on the
basis of the data they operate on.


(5) Cohesion and Coupling


• Cohesion is an indication of the relative functional strength of a module.
• A cohesive module performs a single task, requiring little interaction with other components in other
parts of a program. Stated simply, a cohesive module should (ideally) do just one thing.
• A module having high cohesion and low coupling is said to be functionally independent of other modules.
By the term functional independence, we mean that a cohesive module performs a single task or
function.
• Coupling is an indication of the relative interdependence among modules.
• Coupling depends on the interface complexity between modules, the point at which entry or reference is
made to a module, and what data pass across the interface.
• If two modules interchange large amounts of data, then they are highly interdependent.
• The degree of coupling between two modules depends on their interface complexity.

Classification of Cohesion

Coincidental cohesion
• A module is said to have coincidental cohesion, if it performs a set of tasks that relate to each other very
loosely, if at all.
• In this case, the module contains a random collection of functions. It is likely that the functions have
been put in the module out of pure coincidence without any thought or design.
• For example, in a transaction processing system (TPS), the get-input, print-error, and summarize-members functions are grouped into one module.
Logical cohesion
• A module is said to be logically cohesive, if all elements of the module perform similar operations, e.g.
error handling, data input, data output, etc.
• An example of logical cohesion is the case where a set of print functions generating different output
reports are arranged into a single module.
Temporal cohesion
• When a module contains functions that are related by the fact that all the functions must be executed in
the same time span, the module is said to exhibit temporal cohesion.
• The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit
temporal cohesion.
Procedural cohesion
• A module is said to possess procedural cohesion, if the set of functions of the module are all part of a
procedure (algorithm) in which certain sequence of steps have to be carried out for achieving an
objective, e.g. the algorithm for decoding a message.

Communicational cohesion
• A module is said to have communicational cohesion, if all functions of the module refer to or update the
same data structure, e.g. the set of functions defined on an array or a stack.
Sequential cohesion
• A module is said to possess sequential cohesion, if the elements of the module form parts of a sequence, where the output from one element of the sequence is input to the next.
• For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.


Functional cohesion
• Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single
function. For example, a module containing all the functions required to manage employees’ pay-roll
exhibits functional cohesion.
• If a module exhibits functional cohesion and we are asked to describe what the module does, we would be able to describe it using a single sentence.
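A small, hypothetical contrast between the weakest and the strongest of these forms:

```python
# Coincidental cohesion (poor): unrelated tasks lumped into one module.
def misc_utilities(kind, payload):
    if kind == "print-error":
        print("ERROR:", payload)
    elif kind == "summarize-members":
        return len(payload)

# Functional cohesion (good): every element cooperates toward the single
# task of computing net pay, so the module is describable in one sentence.
def compute_net_pay(basic, allowance_rate=0.4, tax_rate=0.1):
    gross = basic * (1 + allowance_rate)
    return gross - gross * tax_rate

print(compute_net_pay(30000))  # gross 42000.0, tax 4200.0 -> 37800.0
```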

Classification of Coupling

Data coupling
• Two modules are data coupled, if they communicate through a parameter. An example is an elementary
data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc.
• This data item should be problem related and not used for the control purpose.
Stamp coupling
• Two modules are stamp coupled, if they communicate using a composite data item such as a record in
PASCAL or a structure in C.
Control coupling
• Control coupling exists between two modules, if data from one module is used to direct the order of instruction execution in another.
• An example of control coupling is a flag set in one module and tested in another module.
Common coupling
• Two modules are common coupled, if they share data through some global data items.
Content coupling
• Content coupling exists between two modules, if they share code, e.g. a branch from one module into
another module.
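Hypothetical Python sketches of four of these coupling types (standing in for the PASCAL/C examples above):

```python
from dataclasses import dataclass

# Data coupling: communication through elementary parameters only.
def area(length, width):
    return length * width

# Stamp coupling: a composite data item (like a C structure) is passed.
@dataclass
class Rectangle:
    length: float
    width: float

def area_of(rect):
    return rect.length * rect.width

# Control coupling: a flag from one module directs execution in another.
def render(text, as_error):
    if as_error:                 # control flag set by the caller
        print("ERROR:", text)
    else:
        print(text)

# Common coupling: two modules share a global data item.
CONFIG = {"verbose": True}

def log(msg):
    if CONFIG["verbose"]:        # readers and writers both touch the global
        print(msg)
```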


(6) User Interface Design


 User interface design creates an effective communication medium between a human and a computer.
 Following a set of interface design principles, design identifies interface objects and actions and then
creates a screen layout that forms the basis for a user interface prototype.

Design Rules for User Interface


(1) Place the User in Control
Following are the design principles that allow the user to maintain control:
̶ Define interaction modes in a way that does not force a user into unnecessary or undesired
actions.
An interaction mode is the current state of the interface. For example, if spell check is selected in a
word-processor menu, the software moves to a spell-checking mode. There is no reason to force the
user to remain in spell-checking mode if the user desires to make a small text edit along the way.
̶ Provide for flexible interaction.
Because different users have different interaction preferences, choices should be provided. For
example, software might allow a user to interact via keyboard commands, mouse movement, a
digitizer pen, a multi touch screen, or voice recognition commands.
̶ Allow user interaction to be interruptible and undoable.
Even when involved in a sequence of actions, the user should be able to interrupt the sequence to do
something else.
̶ Streamline interaction as skill levels advance and allow the interaction to be customized.
Users often find that they perform the same sequence of interactions repeatedly.
̶ Hide technical internals from the casual user.
The user interface should move the user into the virtual world of the application. The user should not
be aware of the operating system, file management functions, or other arcane computing
technology.
̶ Design for direct interaction with objects that appear on the screen.
The user feels a sense of control when able to manipulate the objects that are necessary to perform
a task in a manner similar to what would occur if the object were a physical thing.
(2) Reduce the User’s Memory Load
The more a user has to remember, the more error-prone the interaction with the system will be.
Following are the design principles that enable an interface to reduce the user’s memory load:
̶ Reduce demand on short-term memory.
When users are involved in complex tasks, the demand on short-term memory can be significant. The
interface should be designed to reduce the requirement to remember past actions, inputs, and
results.
̶ Establish meaningful defaults.
The initial set of defaults should make sense for the average user, but a user should be able to specify
individual preferences. However, a “reset” option should be available, enabling the redefinition of
original default values.
̶ Define shortcuts that are intuitive.
When mnemonics are used to accomplish a system function, the mnemonic should be tied to the
action in a way that is easy to remember.
̶ The visual layout of the interface should be based on a real-world metaphor.
This enables the user to rely on well-understood visual cues, rather than memorizing an arcane
interaction sequence.
̶ Disclose information in a progressive fashion.
The interface should be organized hierarchically. That is, information about a task, an object, or some
behavior should be presented first at a high level of abstraction.


(3) Make the Interface Consistent


The interface should present and acquire information in a consistent fashion.
Following are the design principles that help make the interface consistent:
̶ Allow the user to put the current task into a meaningful context.
Many interfaces implement complex layers of interactions with dozens of screen images. It is
important to provide indicators that enable the user to know the context of the work at hand.
̶ Maintain consistency across a family of applications.
A set of applications should all implement the same design rules so that consistency is maintained for
all interaction.
̶ If past interactive models have created user expectations, do not make changes unless there is a
compelling reason to do so.
Once a particular interactive sequence has become a de facto standard, the user expects this in every
application he encounters.

User Interface Design Models


Four different models come into play when a user interface is analyzed and designed.
1. User profile model – Established by a human engineer or software engineer
̶ Establishes the profile of the end users of the system, based on age, gender, physical abilities, education, cultural or ethnic background, motivation, goals, and personality.
̶ Also captures the user's underlying sense of the application: an understanding of the functions that are performed, the meaning of input and output, and the objectives of the system.
̶ Categorizes users as:
Novices: no syntactic knowledge of the system and little semantic knowledge of the application, with only general computer usage.
Knowledgeable, intermittent users: reasonable semantic knowledge of the system, but low recall of the syntactic information needed to use the interface.
Knowledgeable, frequent users: good semantic and syntactic knowledge (i.e., power users), who look for shortcuts and abbreviated modes of operation.
2. Design model – Created by a software engineer
̶ Derived from the analysis model of the requirements; incorporates data, architectural, interface, and procedural representations of the software.
̶ Constrained by information in the requirements specification that helps define the user of the
system.
3. Implementation model – Created by the software implementers
̶ Consists of the look and feel of the interface combined with all supporting information (books,
videos, help files) that describe system syntax and semantics.
̶ Strives to agree with the user's mental model; users then feel comfortable with the software and use
it effectively.
4. User's mental model – Developed by the user when interacting with the application
̶ Often called the user's system perception. Consists of the image of the system that users carry in
their heads.
̶ Accuracy of the description depends upon the user’s profile and overall familiarity with the software
in the application domain.
The role of the interface designer is to merge these differences and derive a consistent representation of the
interface.


(7) Web Application Design


Design for WebApp encompasses technical and nontechnical activities that include: establishing the look and
feel of the WebApp, creating the aesthetic layout of the user interface, defining the overall architectural
structure, developing the content and functionality that reside within the architecture, and planning the
navigation that occurs within the WebApp.

WebApp Design Quality Requirement


Design is the engineering activity that leads to a high-quality product. This leads us to a recurring question that is encountered in all engineering disciplines: how do we assess the quality of a WebApp?

Web App Interface Design


The objectives of a WebApp interface are to:
(1) Establish a consistent window into the content and functionality provided by the interface.
(2) Guide the user through a series of interactions with the WebApp.
(3) Organize the navigation options and content available to the user.


Aesthetic Design
 Aesthetic design, also called graphic design, is an artistic endeavor that complements the technical
aspects of WebApp design.
 Without it, a WebApp may be functional, but unappealing. With it, a WebApp draws its users into a world
that embraces them on a primitive, as well as an intellectual level.

Content Design
 Content design focuses on two different design tasks, each addressed by individuals with different skill
sets.
 First, a design representation for content objects and the mechanisms required to establish their
relationship to one another is developed.
 In addition, the information within a specific content object is created.
 The latter task may be conducted by copywriters, graphic designers, and others who generate the
content to be used within a WebApp.

Architecture Design
 Architecture design is tied to the goals established for a WebApp, the content to be presented, the users
who will visit, and the navigation philosophy that has been established.
 In most cases, architecture design is conducted in parallel with interface design, aesthetic design, and
content design.
 Because the WebApp architecture may have a strong influence on navigation, the decisions made during
this design action will influence work conducted during navigation design.

Navigation Design
Once the WebApp architecture has been established and the components (pages, scripts, applets, and other
processing functions) of the architecture have been identified, you must define navigation pathways that
enable users to access WebApp content and functions.

Component-Level Design
Modern WebApps deliver increasingly sophisticated processing functions that:
1. Perform localized processing to generate content and navigation capability in a dynamic fashion.
2. Provide computation or data processing capabilities that are appropriate for the WebApp’s business domain.
3. Provide sophisticated database query and access.
4. Establish data interfaces with external corporate systems.

Unit-6 –Software Coding & Testing

(1) Coding Standards and Coding Guidelines

Coding
 Good software development organizations normally require their programmers to adhere to some well-
defined and standard style of coding called coding standards.
 Most software development organizations formulate their own coding standards that suit them most,
and require their engineers to follow these standards rigorously.
 The purpose of requiring all engineers of an organization to adhere to a standard style of coding is the
following:
̶ A coding standard gives a uniform appearance to the code written by different engineers.
̶ It enhances code understanding.
̶ It encourages good programming practices.
 A coding standard lists several rules to be followed during coding, such as the way variables are to be
named, the way the code is to be laid out, error return conventions, etc.

Coding standards and guidelines


The following are some representative coding standards.
Rules for limiting the use of global data:
 These rules list what types of data can be declared global and what cannot.
Contents of the headers preceding codes for different modules:
 The information contained in the headers of different modules should be standard for an organization.
 The exact format in which the header information is organized in the header can also be specified.
 The following are some standard header data:
̶ Name of the module.
̶ Date on which the module was created.
̶ Modification history.
̶ Different functions supported, along with their input/output parameters.
̶ Global variables accessed/modified by the module.
Naming conventions for global variables, local variables, and constant identifiers:
 A possible naming convention can be that global variable names always start with a capital letter, local variable names consist of small letters, and constant names are in capital letters.
Error return conventions and exception handling mechanisms:
 The way error conditions are reported by different functions in a program should be standard within an organization.
Do not use a coding style that is too clever or too difficult to understand:
 Code should be easy to understand. Many inexperienced engineers actually take pride in writing cryptic and incomprehensible code. Clever coding can obscure the meaning of the code and delay understanding.
Avoid obscure side effects:
 The side effects of a function call include modification of parameters passed by reference, modification
of global variables, and I/O operations.
 An unclear side effect is one that is not obvious from a casual examination of the code.
Do not use an identifier for multiple purposes:
 Programmers often use the same identifier to denote several temporary entities.
 For example, some programmers use a temporary loop variable for computing and storing the final result.
 Each variable should be given a descriptive name indicating its purpose; several of these conventions are illustrated in the sketch below.
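A short, hypothetical sketch pulling several of these conventions together: a standard module header, the naming rules above, and single-purpose identifiers.

```python
# Module  : payroll.py (illustrative header, per the standard above)
# Created : 2016-01-10
# History : v1.0 initial version
# Function: compute_net_pay(basic) -> float
# Globals : Tax_Rate (read only)

Tax_Rate = 0.1            # global: name starts with a capital letter
MAX_BASIC = 100000        # constant: all capital letters

def compute_net_pay(basic):
    allowance = basic * 0.4        # locals: small letters, one purpose each
    gross = basic + allowance      # no identifier is reused for a second,
    net = gross * (1 - Tax_Rate)   # unrelated temporary value
    return net
```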


The code should be well-documented:


 As a rule of thumb, there must be at least one comment line on average for every three source lines.
The length of any function should not exceed 10 source lines:
 A function that is very lengthy is usually very difficult to understand as it probably carries out many
different functions.
Do not use goto statements:
 Use of goto statements makes a program unstructured and makes it very difficult to understand.

(2) Code Review, Code Walk Through, Code Inspection

Code Review
 Code review for a module is carried out after the module has been successfully compiled and all the syntax errors have been eliminated.
 Code reviews are extremely cost-effective strategies for reducing coding errors and producing high-quality code. Normally, two types of reviews are carried out on the code of a module: code inspection and code walkthrough.

Code Walk Through


 Code walkthrough is an informal code analysis technique.
 In this technique, after a module has been coded and successfully compiled (with all syntax errors eliminated), a few members of the development team are given the code a few days before the walkthrough meeting to read and understand it.
 Each member selects some test cases and simulates execution of the code by hand.
 The main objectives of the walk through are to discover the algorithmic and logical errors in the code.
 Even though a code walk through is an informal analysis technique, several guidelines have evolved over
the years for making this naïve but useful analysis technique more effective.

Code Inspection
 In contrast to code walkthrough, the aim of code inspection is to discover some common types of errors caused by oversight and improper programming.
 In other words, during code inspection the code is examined for the presence of certain kinds of errors,
in contrast to the hand simulation of code execution done in code walk through.
 For instance, consider the classical error of writing a procedure that modifies a formal parameter while
the calling routine calls that procedure with a constant actual parameter.
 It is more likely that such an error will be discovered by looking for these kinds of mistakes in the code,
rather than by simply hand simulating execution of the procedure.
 In addition to the commonly made errors, adherence to coding standards is also checked during code
inspection.
 Good software development companies collect statistics regarding different types of errors commonly
committed by their engineers and identify the type of errors most frequently committed.

Software Documentation
 When a software product is developed, not only the executable files and the source code but also various kinds of documents, such as the users’ manual, software requirements specification (SRS) document, design documents, test documents, installation manual, etc., are developed as part of the software engineering process.
 All these documents are a vital part of good software development practice.
 Different types of software documents can broadly be classified into the following:
̶ Internal documentation
̶ External documentation


 Internal documentation is the code comprehension features provided as part of the source code itself.
 Internal documentation is provided through appropriate module headers and comments embedded in
the source code.
 Internal documentation is also provided through the useful variable names, module and function
headers, code indentation, code structuring, use of enumerated types and constant identifiers, use of
user-defined data types, etc.
 Research findings suggest that, of all forms of internal documentation, meaningful variable names are the most useful; this is in contrast to the common expectation that code comments would be. Comments written without thought add little.
 Even when code is carefully commented, meaningful variable names are still more helpful in understanding a piece of code. Good software development organizations usually ensure good internal documentation by appropriately formulating their coding standards and coding guidelines.
 External documentation is provided through various types of supporting documents such as users’
manual, software requirements specification document, design document, test documents, etc.
 A systematic software development style ensures that all these documents are produced in an orderly
fashion.

(3) Testing Strategies


 Software testing is a critical element of software quality assurance and represents the ultimate review of
specification, design, and code generation.
 It is not unusual for a software development organization to spend between 30 and 40 percent of total project effort on testing.
 The engineer creates a series of test cases that are intended to "defeat" the software that has been built.
 In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as
destructive rather than constructive.

Type of Testing Approach


 Verification and Validation approach:
 Verification refers to the set of activities that ensure that software correctly implements a specific
function.
 Validation refers to a different set of activities that ensure that the software that has been built is
traceable to customer requirements.
 Verification: "Are we building the product right?"
 Validation: "Are we building the right product?"

Testing Strategies

 Initially, system engineering defines the role of software and leads to software requirements analysis,
where the information domain, function, behavior, performance, constraints, and validation criteria for
software are established.


 Moving inward along the spiral, you come to design and finally to coding. To develop computer software,
you spiral inward (counterclockwise) along streamlines that decrease the level of abstraction on each
turn.
Unit Testing:

 A unit is the smallest testable part of a software system; it may include code files, classes, and methods, which can be tested individually for correctness.
 Unit testing is the process of validating such small building blocks of a complex system, well before testing an integrated large module or the system as a whole.
 Driver and/or stub software must be developed for each unit test. A driver is nothing more than a "main program" that accepts test case data, passes such data to the component, and prints relevant results.
 Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
 A stub or "dummy subprogram" uses the subordinate module's interface.
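A minimal sketch of a driver and a stub (all names hypothetical): compute_fine() is the component under test, days_overdue_stub() stands in for a subordinate module, and driver() feeds the component test-case data and prints the result.

```python
# Stub: same interface as the real subordinate module, canned answer.
def days_overdue_stub(member_id, book_id):
    return 12

# Component under test: fine = overdue days x daily rate.
def compute_fine(member_id, book_id, rate=5, lookup=days_overdue_stub):
    return lookup(member_id, book_id) * rate

# Driver: a "main program" that feeds test-case data to the component
# and prints relevant results.
def driver():
    result = compute_fine(member_id=1001, book_id="B42")
    print("expected 60, got", result)
    assert result == 60

driver()
```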
Integration Testing:
 Integration is defined as a set of interactions among components.
 Testing the interactions between modules, and their interactions with other systems externally, is called integration testing.
 Integrated modules are tested to verify their combined functionality after integration.
 Integration testing addresses the issues associated with the dual problems of verification and program
construction.
 Modules are typically code modules, individual applications, client and server applications on a network,
etc. This type of testing is especially relevant to client/server and distributed systems.
 Types of integration testing are:
̶ Top-down integration
̶ Bottom-up integration
̶ Regression testing
̶ Smoke testing
Validation Testing
 The process of evaluating software during the development process or at the end of the development
process to determine whether it satisfies specified business requirements.
 Validation testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
 Validation testing provides final assurance that software meets all informational, functional, behavioral,
and performance requirements.
 The alpha test is conducted at the developer’s site by a representative group of end users.
 The software is used in a natural setting with the developer “looking over the shoulder” of the users and
recording errors and usage problems.
 Alpha tests are conducted in a controlled environment.


 The beta test is conducted at one or more end-user sites.


 Unlike alpha testing, the developer generally is not present.
 Therefore, the beta test is a “live” application of the software in an environment that cannot be
controlled by the developer.
System Testing
 In system testing the software and other system elements are tested as a whole.
 To test computer software, you spiral out in a clockwise direction along streamlines that increase the
scope of testing with each turn.
 System testing verifies that all elements mesh properly and that overall system function/performance is
achieved.
 Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that
recovery is properly performed.
 If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
 Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it
from improper penetration.
 During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.
 Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency,
or volume.
 A variation of stress testing is a technique called sensitivity testing.
 Performance testing is designed to test the run-time performance of software within the context of an
integrated system.
 Performance testing occurs throughout all steps in the testing process.
 Even at the unit level, the performance of an individual module may be assessed as tests are conducted.
 Deployment testing, sometimes called configuration testing, exercises the software in each environment
in which it is to operate.
 In addition, deployment testing examines all installation procedures and specialized installation software
that will be used by customers, and all documentation that will be used to introduce the software to end
users.
Acceptance Testing
 Acceptance testing is a level of software testing where a system is tested for acceptability.
 The purpose of this test is to evaluate the system’s compliance with the business requirements and
assess whether it is acceptable for delivery.
 It is a formal testing with respect to user needs, requirements, and business processes conducted to
determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or
other authorized entity to determine whether or not to accept the system.
 Acceptance Testing is performed after System Testing and before making the system available for actual
use.


(4) White Box Testing and Black Box Testing

White box testing


 White-box testing, sometimes called glass-box testing, is a test-case design philosophy that uses the
control structure described as part of component-level design to derive test cases.
 Using white-box testing methods, you can derive test cases that
1 Guarantee that all independent paths within a module have been exercised at least once.
2 Exercise all logical decisions on their true and false sides.
3 Execute all loops at their boundaries and within their operational bounds.
4 Exercise internal data structures to ensure their validity.
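A small, hypothetical white-box example: the test cases are chosen from the control structure of total_positive() so that the decision is exercised on both its true and false sides and the loop runs zero, one, and many times.

```python
def total_positive(values):
    total = 0
    for v in values:          # loop: exercise zero, one, and many passes
        if v > 0:             # decision: needs a true and a false case
            total += v
    return total

assert total_positive([]) == 0            # loop body never executes
assert total_positive([5]) == 5           # decision taken on its true side
assert total_positive([-3]) == 0          # decision taken on its false side
assert total_positive([1, -2, 3]) == 4    # mixed path through the loop
print("all white-box cases passed")
```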
 White Box Testing method is applicable to the following levels of software testing:
̶ It is mainly applied to Unit testing and Integration testing
̶ Unit Testing: For testing paths within a unit.
̶ Integration Testing: For testing paths between units.
̶ System Testing: For testing paths between subsystems.
White box testing advantages
 Testing can commence at an earlier stage, since one need not wait for the GUI to be available.
 Testing is more thorough, with the possibility of covering most paths.
White box testing disadvantages
 Since tests can be very complex, highly skilled resources are required, with thorough knowledge of
programming and implementation.
 Test script maintenance can be a burden if the implementation changes too frequently.
 Since this method of testing is closely tied to the application being tested, tools that cater to every kind of implementation/platform may not be readily available.

Black Box Testing


 Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.
 That is, black-box testing techniques enable you to derive sets of input conditions that will fully exercise
all functional requirements for a program.
 Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach
that is likely to uncover a different class of errors than white box methods.
 Black-box testing attempts to find errors in the following categories:
1 Incorrect or missing functions
2 Interface errors
3 Errors in data structures or external database access
4 Behavior or performance errors
5 Initialization and termination errors.
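A small, hypothetical black-box example: leap_year() is exercised purely through its functional specification, with one case per input class and no knowledge of the internal structure.

```python
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Cases chosen from the specification alone, one per input class.
cases = {2024: True,   # divisible by 4, not a century
         1900: False,  # century not divisible by 400
         2000: True,   # century divisible by 400
         2023: False}  # ordinary non-leap year

for year, expected in cases.items():
    assert leap_year(year) is expected
print("all black-box cases passed")
```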
 Black Box Testing method is applicable to the following levels of software testing:
̶ It is mainly applied to System testing and Acceptance testing
̶ Integration Testing
̶ System Testing
̶ Acceptance Testing
 The higher the level, and hence the bigger and more complex the box, the more the black-box testing method comes into use.
Black box testing advantages
 Tests are done from a user’s point of view and will help in exposing discrepancies in the specifications.
 Tester need not know programming languages or how the software has been implemented.
 Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
 Test cases can be designed as soon as the specifications are complete.

Black box testing disadvantages


 Only a small number of possible inputs can be tested and many program paths will be left untested.
 Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
 Tests can be redundant if the software designer/ developer has already run a test case.
 Ever wondered why a soothsayer closes the eyes when foretelling events? It is much the same in black-box testing: the tester works with eyes closed to the internals.

(5) Quality Function Deployment (QFD)


 Quality function deployment (QFD) is a quality management technique that translates the needs of the
customer into technical requirements for software.
 QFD “concentrates on maximizing customer satisfaction from the software engineering process.”
 To accomplish this, QFD emphasizes an understanding of what is valuable to the customer and then
deploys these values throughout the engineering process.
 QFD identifies three types of requirements:
Normal requirements
 The objectives and goals that are stated for a product or system during meetings with the customer.
 If these requirements are present, the customer is satisfied. Examples of normal requirements might be
requested types of graphical displays, specific system functions, and defined levels of performance.
Expected requirements
 These requirements are implicit to the product or system and may be so fundamental that the customer
does not explicitly state them.
 Their absence will be a cause for significant dissatisfaction.

Exciting requirements
 These features go beyond the customer’s expectations and prove to be very satisfying when present.
 For example, software for a new mobile phone comes with standard features, but is coupled with a set of
unexpected capabilities that delight every user of the product.
Although QFD concepts can be applied across the entire software process, specific QFD techniques are
applicable to the requirements elicitation activity.
QFD uses customer interviews and observation, surveys, and examination of historical data as raw data for
the requirements gathering activity.


(6) Testing Conventional Applications

Software Testing Fundamentals


The following characteristics lead to testable software.
Operability.
 “The better it works, the more efficiently it can be tested.”
 If a system is designed and implemented with quality in mind, relatively few bugs will block the execution
of tests, allowing testing to progress without fits and starts.
Observability.
 “What you see is what you test.”
 Inputs provided as part of testing produce distinct outputs.
 System states and variables are visible or queriable during execution. Incorrect output is easily identified.
Internal errors are automatically detected and reported. Source code is accessible.
Controllability.
 “The better we can control the software, the more the testing can be automated and optimized.”
 All possible outputs can be generated through some combination of input, and I/O formats are
consistent and structured.
 All code is executable through some combination of input. Software and hardware states and variables
can be controlled directly by the test engineer.
 Tests can be conveniently specified, automated, and reproduced.
Decomposability.
 “By controlling the scope of testing, we can more quickly isolate problems and perform smarter
retesting.”
 The software system is built from independent modules that can be tested independently.
Simplicity.
 “The less there is to test, the more quickly we can test it.”
 The program should exhibit functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements); structural simplicity (e.g., architecture is modularized to limit the propagation of faults),
and code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability.
 “The fewer the changes, the fewer the disruptions to testing.”
 Changes to the software are infrequent, controlled when they do occur, and do not invalidate existing
tests.
 The software recovers well from failures.
Understandability.
 “The more information we have, the smarter we will test.”
 The architectural design and the dependencies between internal, external, and shared components are
well understood.
 Technical documentation is instantly accessible, well organized, specific and detailed, and accurate.
Changes to the design are communicated to testers.


(7) Testing Object Oriented Applications

Unit Testing in the OO Context


 When object-oriented software is considered, the concept of the unit changes.
 Encapsulation drives the definition of classes and objects.
 This means that each class and each instance of a class (object) packages attributes (data) and the
operations (also known as methods or services) that manipulate these data.
 Rather than testing an individual module, the smallest testable unit is the encapsulated class.
 Because a class can contain a number of different operations and a particular operation may exist as part
of a number of different classes, the meaning of unit testing changes dramatically.
 Unlike unit testing of conventional software, which tends to focus on the algorithmic detail of a module
and the data that flows across the module interface, class testing for OO software is driven by the
operations encapsulated by the class and the state behavior of the class.
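A minimal sketch of such class testing (hypothetical BankAccount, using Python's unittest): the test cases are driven by the operations the class encapsulates and by its state behavior, rather than by a module's algorithmic detail.

```python
import unittest

class BankAccount:
    def __init__(self):
        self._balance = 0        # state encapsulated with the operations

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance

class TestBankAccount(unittest.TestCase):
    def test_state_after_operation_sequence(self):
        acct = BankAccount()
        acct.deposit(100)
        acct.withdraw(40)
        self.assertEqual(acct.balance(), 60)

    def test_illegal_transition_rejected(self):
        acct = BankAccount()
        with self.assertRaises(ValueError):
            acct.withdraw(1)     # cannot withdraw from the empty state

if __name__ == "__main__":
    unittest.main()
```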

Integration Testing in the OO Context


 Because object-oriented software does not have a hierarchical control structure, conventional top-down
and bottom-up integration strategies have little meaning.
 In addition, integrating operations one at a time into a class (the conventional incremental integration
approach) is often impossible because of the “direct and indirect interactions of the components that
make up the class”.
 There are two different strategies for integration testing of OO systems.
 The first, thread-based testing, integrates the set of classes required to respond to one input or event for
the system. Each thread is integrated and tested individually.
 Regression testing is applied to ensure that no side effects occur.
 The second integration approach, use-based testing, begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes.
 After the independent classes are tested, the next layer of classes, called dependent classes, that use the
independent classes are tested.
 Cluster testing is one step in the integration testing of OO software.
 Here, a cluster of collaborating classes (determined by examining the CRC and object-relationship model) is exercised by designing test cases that attempt to uncover errors in the collaborations.

Validation Testing in an OO Context


 At the validation or system level, the details of class connections disappear.
 Like conventional validation, the validation of OO software focuses on user-visible actions and user-
recognizable outputs from the system.
 To assist in the derivation of validation tests, the tester should draw upon use cases that are part of the
requirements model.
 The use case provides a scenario that has a high likelihood of uncovering errors in user-interaction requirements.
 Conventional black-box testing methods can be used to drive validation tests.


(8) Testing Web Applications


 WebApp testing is a collection of related activities with a single goal: to uncover errors in WebApp
content, function, usability, navigability, performance, capacity, and security.
 To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied.

Dimensions of Quality
Content is evaluated at both a syntactic and semantic level. At the syntactic level, spelling, punctuation, and grammar are assessed for text-based documents. At the semantic level, correctness (of the information presented), consistency (across the entire content object and related objects), and lack of ambiguity are all assessed.
Function is tested to uncover errors that indicate lack of conformance to customer requirements. Each WebApp function is assessed for correctness, stability, and general conformance to appropriate implementation standards (e.g., Java or AJAX language standards).
Structure is assessed to ensure that it properly delivers WebApp content and function, that it is extensible, and that it can be supported as new content or functionality is added.
Usability is tested to ensure that each category of user is supported by the interface and can learn and apply
all required navigation syntax and semantics.
Navigability is tested to ensure that all navigation syntax and semantics are exercised to uncover any
navigation errors (e.g., dead links, improper links, and erroneous links).
Performance is tested under a variety of operating conditions, configurations, and loading to ensure that
the system is responsive to user interaction and handles extreme loading without unacceptable operational
degradation.
Compatibility is tested by executing the WebApp in a variety of different host configurations on both the
client and server sides. The intent is to find errors that are specific to a unique host configuration.
Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or
databases.
Security is tested by assessing potential vulnerabilities and attempting to exploit each. Any successful
penetration attempt is deemed a security failure.

Content Testing
 Errors in WebApp content can be as trivial as minor typographical errors or as significant as incorrect
information, improper organization, or violation of intellectual property laws.
 Content testing attempts to uncover these and many other problems before the user encounters them.
 Content testing combines both reviews and the generation of executable test cases.
 Reviews are applied to uncover semantic errors in content.
 Executable testing is used to uncover content errors that can be traced to dynamically derived content
that is driven by data acquired from one or more databases.

User Interface Testing


 Verification and validation of a WebApp user interface occurs at three distinct points.
 During requirements analysis, the interface model is reviewed to ensure that it conforms to stakeholder
requirements and to other elements of the requirements model.
 During design the interface design model is reviewed to ensure that generic quality criteria established
for all user interfaces have been achieved and that application-specific interface design issues have been
properly addressed.
 During testing, the focus shifts to the execution of application-specific aspects of user interaction as they
are manifested by interface syntax and semantics.
 In addition, testing provides a final assessment of usability.


Component-Level Testing
 Component-level testing, also called function testing, focuses on a set of tests that attempt to uncover
errors in WebApp functions.
 Each WebApp function is a software component (implemented in one of a variety of programming or
scripting languages) and can be tested using black-box (and in some cases, white-box) techniques.
 Component-level test cases are often driven by forms-level input. Once forms data are defined, the user
selects a button or other control mechanism to initiate execution.
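A minimal sketch of such a forms-driven, black-box component test in Python; validate_order_form() is an invented example component, and the cases exercise valid, invalid, and boundary input classes:

    def validate_order_form(form):
        # Invented WebApp component: validates forms-level input on the server side.
        errors = []
        if "@" not in form.get("email", ""):
            errors.append("invalid email")
        try:
            if int(form.get("quantity", "")) < 1:
                errors.append("quantity must be at least 1")
        except ValueError:
            errors.append("quantity must be a number")
        return errors

    # Black-box cases derived from the form's input classes.
    assert validate_order_form({"email": "a@b.com", "quantity": "1"}) == []
    assert "invalid email" in validate_order_form({"email": "nope", "quantity": "2"})
    assert "quantity must be a number" in validate_order_form({"email": "a@b.com", "quantity": "x"})
    assert "quantity must be at least 1" in validate_order_form({"email": "a@b.com", "quantity": "0"})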

Navigation Testing
 The job of navigation testing is to ensure that the mechanisms that allow the WebApp user to travel
through the WebApp are all functional and to validate that each navigation semantic unit (NSU) can be
achieved by the appropriate user category.
 Navigation mechanisms that should be tested include navigation links, redirects, bookmarks, frames and
framesets, site maps, and internal search engines.
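Dead and improper links, in particular, are easy to probe automatically. A sketch using only the Python standard library; the URL list is a placeholder for links harvested from the WebApp's pages or site map:

    import urllib.error
    import urllib.request

    def find_dead_links(urls):
        # Probe each navigation link and collect those that fail or return an error.
        dead = []
        for url in urls:
            try:
                request = urllib.request.Request(url, method="HEAD")
                with urllib.request.urlopen(request, timeout=10) as response:
                    if response.status >= 400:
                        dead.append((url, response.status))
            except (urllib.error.HTTPError, urllib.error.URLError) as exc:
                dead.append((url, str(exc)))
        return dead

    print(find_dead_links(["https://example.com/", "https://example.com/no-such-page"]))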

Configuration Testing
 Configuration variability and instability are important factors that make WebApp testing a challenge.
Hardware, operating system(s), browsers, storage capacity, network communication speeds, and a
variety of other client-side factors are difficult to predict for each user.
 One user’s impression of the WebApp and the manner in which she interacts with it can differ
significantly from another user’s experience, if both users are not working within the same client-side
configuration.
 The job of configuration testing is not to exercise every possible client-side configuration.
 Rather, it is to test a set of probable client-side and server-side configurations to ensure that the user
experience will be the same on all of them and to isolate errors that may be specific to a particular
configuration.
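One way to organize this is as a matrix of probable configurations; in the Python sketch below, run_smoke_test() is an assumed hook that would, in practice, drive a browser-automation tool against the chosen client configuration:

    import itertools

    BROWSERS = ["chrome", "firefox", "safari"]
    PLATFORMS = ["windows", "macos", "android"]

    def run_smoke_test(browser, platform):
        # Assumed hook: launch the WebApp under this client configuration and
        # return True if the core pages render and behave correctly.
        return True

    # Exercise a set of probable configurations, not every possible one.
    failures = [(b, p)
                for b, p in itertools.product(BROWSERS, PLATFORMS)
                if not run_smoke_test(b, p)]
    print("failing configurations:", failures)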

Security Testing
 Security tests are designed to probe vulnerabilities of the client-side environment, the network
communications that occur as data are passed from client to server and back again, and the server-side
environment.
 Each of these domains can be attacked, and it is the job of the security tester to uncover weaknesses that
can be exploited by those with the intent to do so.
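As one small example of probing for a weakness, the sketch below submits a marker payload to an assumed search endpoint and flags the page if the payload is echoed back unescaped, a symptom of reflected cross-site scripting. Real security testing covers far more ground than this:

    import urllib.parse
    import urllib.request

    def input_is_reflected_unescaped(base_url):
        # Submit a marker payload and check whether it comes back verbatim.
        marker = "<script>alert(1)</script>"
        url = base_url + "?q=" + urllib.parse.quote(marker)
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
        return marker in body  # True suggests a reflected-XSS weakness

    # Example call against an assumed endpoint:
    # input_is_reflected_unescaped("https://example.com/search")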

Performance Testing
 Performance testing is used to uncover performance problems that can result from lack of server-side
resources, inappropriate network bandwidth, inadequate database capabilities, faulty or weak operating
system capabilities, poorly designed WebApp functionality, and other hardware or software issues that
can lead to degraded client-server performance.
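A crude load-test sketch using only the Python standard library; the URL and request count are assumptions, and a real performance test would also vary configurations and monitor server-side resources:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"  # assumed page under load

    def timed_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    # Fire 50 concurrent requests and report response-time statistics,
    # a first probe for degradation under load.
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_request, range(50)))
    print("mean %.2fs, worst %.2fs" % (sum(latencies) / len(latencies), max(latencies)))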

(9) Verification and Validation


1. Verification is a static practice of verifying documents, design, code, and program; validation is a
dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; validation always involves executing the code.
3. Verification is human-based checking of documents and files; validation is computer-based execution
of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking; validation uses
methods like black-box (functional) testing, gray-box testing, and white-box (structural) testing.
5. Verification checks whether the software conforms to its specifications; validation checks whether the
software meets customer expectations and requirements.
6. Verification can catch errors that validation cannot, and is a low-level exercise; validation can catch
errors that verification cannot, and is a high-level exercise.
7. The targets of verification are the requirements specification, application and software architecture,
high-level and complete design, and database design; the target of validation is the actual product: a
unit, a module, a set of integrated modules, or the final product.
8. Verification is done by the QA team to ensure that the software meets the specifications in the SRS
document; validation is carried out with the involvement of the testing team.
9. Verification generally comes first; validation follows it.

Unit-7 –Software Quality Assurance

(1) Software Quality Assurance (SQA)


 Software quality assurance (often called quality management) is an umbrella activity that is applied
throughout the software process.
 It is a planned and systematic pattern of activities necessary to provide a high degree of confidence in the
quality of a product.
 Software quality assurance (SQA) encompasses
̶ An SQA process.
̶ Specific quality assurance and quality control tasks.
̶ Effective software engineering practice.
̶ Control of all software work products and the changes made to them.
̶ A procedure to ensure compliance with software development standards.
̶ Measurement and reporting mechanisms.

Importance of SQA
 Quality control and assurance are essential activities for any business that produces products to be used
by others.
 Prior to the twentieth century, quality control was the sole responsibility of the craftsperson who built a
product.
 As time passed and mass production techniques became commonplace, quality control became an
activity performed by people other than the ones who built the product.
 Software quality is one of the pivotal aspects of a software development company.
 Software quality assurance starts from the beginning of a project, right from the analysis phase.
 SQA checks the adherence to software product standards, processes, and procedures.
 SQA includes the systematic process of assuring that standards and procedures are established and are
followed throughout the software development life cycle and test cycle as well.
 Compliance of the built product with agreed-upon standards and procedures is evaluated through process
monitoring, product evaluation, project management, etc.
 The major reason of involving software quality assurance in the process of software product
development is to make sure that the final product built is as per the requirement specification and
comply with the standards.

SQA Activities
Prepare an SQA plan for a project
 The plan is developed as part of project planning and is reviewed by all stakeholders.
 Quality assurance actions performed by the software engineering team and the SQA group are governed
by the plan.
 The plan identifies evaluations to be performed, audits and reviews to be conducted, standards that are
applicable to the project, procedures for error reporting and tracking, work products that are produced
by the SQA group, and feedback that will be provided to the software team.
Participate in the development of the project’s software process description
 The software team selects a process for the work to be performed.
 The SQA group reviews the process description for compliance with organizational policy, internal
software standards, externally imposed standards, and other parts of the software project plan.
Review software engineering activities to verify compliance with the defined software process.
 The SQA group identifies, documents, and tracks deviations from the process and verifies that
corrections have been made.

Audit designated software work products to verify compliance with those defined as part of the
software process
 The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that
corrections have been made; and periodically reports the results of its work to the project manager.
Ensure that deviations in software work and work products are documented and handled
according to a documented procedure.
 Deviations may be encountered in the project plan, process description, applicable standards, or
software engineering work products.
Record any noncompliance and report it to senior management
 Noncompliance items are tracked until they are resolved.

SQA Techniques
Data Collection
Statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause (e.g., non-conformance to specifications,
design error, violation of standards, poor communication with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes),
isolate the 20 percent (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have caused the
defects.
 A software engineering organization collects information on defects for a period of one year.
 Some of the defects are uncovered as software is being developed.
 Others are encountered after the software has been released to its end-users. Although hundreds of
different errors are uncovered, all can be tracked to one (or more) of the following causes:
̶ incomplete or erroneous specifications (IES)
̶ misinterpretation of customer communication (MCC)
̶ intentional deviation from specifications (IDS)
̶ violation of programming standards (VPS)
̶ error in data representation (EDR)
̶ inconsistent component interface (ICI)
̶ error in design logic (EDL)
̶ incomplete or erroneous testing (IET)
̶ inaccurate or incomplete documentation (IID)
̶ error in programming language translation of design (PLT)
̶ ambiguous or inconsistent human/computer interface (HCI)
̶ miscellaneous (MIS)
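The Pareto step is easy to mechanize once each logged defect is tagged with one of these cause codes. A minimal Python sketch (the defect log below is invented for illustration):

    from collections import Counter

    # Invented defect log: each entry carries one of the cause codes above.
    defect_log = ["IES", "MCC", "IES", "EDL", "IES", "VPS",
                  "MCC", "IES", "EDR", "MCC", "IES", "IET"]

    counts = Counter(defect_log)
    total = len(defect_log)

    # Accumulate causes from most to least frequent until about 80 percent
    # of all defects are covered; those causes are the "vital few".
    covered, vital_few = 0, []
    for cause, n in counts.most_common():
        vital_few.append(cause)
        covered += n
        if covered / total >= 0.8:
            break
    print("vital few causes:", vital_few)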
Six Sigma: refer to Section (4), SQA standards.


(2) Software Reviews (Formal Technical Reviews)


A formal technical review (FTR) is a software quality control activity performed by software engineers (and
others).
The objectives of an FTR are:
(1) To uncover errors in function, logic, or implementation for any representation of the software.
(2) To verify that the software under review meets its requirements.
(3) To ensure that the software has been represented according to predefined standards.
(4) To achieve software that is developed in a uniform manner.
(5) To make projects more manageable.
Review Reporting and Record Keeping
 During the FTR, a reviewer (the recorder) actively records all issues that have been raised.
 These are summarized at the end of the review meeting, and a review issues list is produced. In addition,
a formal technical review summary report is completed.
Review Guidelines
 Guidelines for conducting formal technical reviews must be established in advance, distributed to all
reviewers, agreed upon, and then followed.
 A review that is un-controlled can often be worse than no review at all.
 Review the product, not the producer.
 Set an agenda and maintain it.
 Limit debate and rebuttal.
 Point out problem areas, but don't attempt to solve every problem noted.
 Take written notes.
 Limit the number of participants and insist upon advance preparation.
 Develop a checklist for each product that is likely to be reviewed.
 Allocate resources and schedule time for FTRs
 Conduct meaningful training for all reviewers.
 Review your early reviews.
Sample-Driven Reviews
 In an ideal setting, every software engineering work product would undergo a formal technical review.
 In the real world of software projects, resources are limited and time is short.
 As a consequence, reviews are often skipped, even though their value as a quality control mechanism is
recognized.

(3) Software Reliability


 Software reliability is defined in statistical terms as “the probability of failure-free operation of a
computer program in a specified environment for a specified time”.

Measures of Reliability
 A simple measure of reliability is mean-time-between-failure (MTBF):
 MTBF = MTTF + MTTR
 Where the acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to- repair, respectively.
 Many researchers argue that MTBF is a far more useful measure than other quality-related software
metrics. An end user is concerned with failures, not with the total defect count.
 Because each defect contained within a program does not have the same failure rate, the total defect
count provides little indication of the reliability of a system.
 An alternative measure of reliability is failures-in-time (FIT) a statistical measure of how many failures a
component will have over one billion hours of operation.
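A short worked example with assumed figures ties these measures together; availability = MTTF/MTBF is a commonly used companion measure:

    MTTF = 400.0  # mean hours of operation before a failure (assumed)
    MTTR = 8.0    # mean hours to repair after a failure (assumed)

    MTBF = MTTF + MTTR          # 408 hours between successive failures
    availability = MTTF / MTBF  # ~0.980 of the time the system is usable
    FIT = 1e9 / MTBF            # ~2.45 million failures per 10^9 hours

    print("MTBF=%.0f h, availability=%.3f, FIT=%.2e" % (MTBF, availability, FIT))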


Software Safety
 Software safety is a software quality assurance activity that focuses on the identification and assessment
of potential hazards that may affect software negatively and cause an entire system to fail.
 If hazards can be identified early in the software process, software design features can be specified that
will either eliminate or control potential hazards.
 A modeling and analysis process is conducted as part of software safety.
 Initially, hazards are identified and categorized by criticality and risk.
 Although software reliability and software safety are closely related to one another, it is important to
understand the subtle difference between them.
 Software reliability uses statistical analysis to determine the likelihood that a software failure will occur.
 However, the occurrence of a failure does not necessarily result in a hazard or accident.
 Software safety examines the ways in which failures result in conditions that can lead to an accident.

(4) The quality standards ISO 9000 and 9001, Six Sigma, CMM

ISO 9001
 In order to bring quality into products and services, many organizations are adopting a Quality Assurance System.
 ISO standards are issued by the International Organization for Standardization (ISO) in Switzerland.
 Proper documentation is an important part of an ISO 9001 Quality Management System.
 ISO 9001 is the quality assurance standard that applies to software engineering.
 It includes requirements that must be present for an effective quality assurance system.
 The ISO 9001 standard is applicable to all engineering disciplines.
 The requirements delineated by ISO 9001:2000 address topics such as
̶ Management responsibility
̶ Quality system
̶ Contract review
̶ Design control
̶ Document and data control
̶ Product identification and traceability
̶ Process control
̶ Inspection and testing
̶ Corrective and preventive action
̶ Control of quality records
̶ Internal quality audits
̶ Training
̶ Servicing
̶ Statistical techniques
 In order for a software organization to become registered to ISO 9001:2000, it must establish policies and
procedures to address each of the requirements just noted (and others) and then be able to demonstrate
that these policies and procedures are being followed.


Six Sigma
 Six sigma is “A generic quantitative approach to improvement that applies to any process.”
 “Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects in any process,
from manufacturing to transactional and from product to service.”
 To achieve six sigma a process must not produce more than 3.4 defects per million opportunities.
 5 Sigma -> 230 defects per million
 4 Sigma -> 6210 defects per million
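The sigma level is derived from the defects-per-million-opportunities (DPMO) figure, which is simple to compute; the inspection numbers below are invented:

    # DPMO = (number of defects / (units inspected * opportunities per unit)) * 1,000,000
    defects = 17
    units = 1000
    opportunities_per_unit = 50  # distinct ways each unit could be defective

    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    print(dpmo)  # 340.0 -- better than 4 sigma (6210) but well short of 6 sigma (3.4)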
 Six Sigma has two methodologies:

(1) DMAIC (Define, Measure, Analyze, Improve, Control)


̶ Define: Define the problem or process to be improved, in relation to the customer and goals
̶ Measure: How can you measure this process in a systematic way?
̶ Analyze: Analyze the process or problem and identify the way in which it can be improved.
What are the root causes of problems within the process?
̶ Improve: Once you know the causes of the problems, present solutions for them and implement
them
̶ Control: Utilize Statistical Process Control to continuously measure your results and ensure you
are improving
̶ Several software packages are available to assist in measuring yield, defects per million
opportunities, etc.

(2) DMADV: (Define, Measure, Analyze, Design, Verify)


̶ Define, Measure, and Analyze are similar to the above method.
̶ Design: Avoid root causes of defects and meet the customer requirements.
̶ Verify: To verify the process, compare the process with the standard plan and find differences.

CMM (Capability Maturity Model)


 To determine an organization’s current state of process maturity, the SEI uses an assessment that results
in a five point grading scheme.
 The grading scheme determines compliance with a capability maturity model (CMM) that defines key
activities required at different levels of process maturity.
 The SEI approach provides a measure of the global effectiveness of a company's software engineering
practices and establishes five process maturity levels that are defined in the following manner:
 Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few
processes are defined, and success depends on individual effort.
 Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects with
similar applications.
 Level 3: Defined. The software process for both management and engineering activities is documented,
standardized, and integrated into an organization wide software process. All projects use a documented
and approved version of the organization's process for developing and supporting software. This level
includes all characteristics defined for level 2.
 Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the
software process and products are quantitatively understood and controlled using detailed measures.
This level includes all characteristics defined for level 3.
 Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the
process and from testing innovative ideas and technologies. This level includes all characteristics defined
for level 4.


(5) SQA Plan


The SQA Plan provides a road map for instituting software quality assurance.
Developed by the SQA group (or by the software team if an SQA group does not exist), the plan serves as a
template for SQA activities that are instituted for each software project.
The standard recommends a structure that identifies:
1. The purpose and scope of the plan.
2. A description of all software engineering work products (e.g., models, documents, source code) that fall
within the purview of SQA.
3. All applicable standards and practices that are applied during the software process.
4. SQA actions and tasks (including reviews and audits) and their placement throughout the software
process.
5. The tools and methods that support SQA actions and tasks.
6. Software configuration management procedures.
7. Methods for assembling, safeguarding, and maintaining all SQA-related records.
8. Organizational roles and responsibilities relative to product quality.



Unit-8 –Software Maintenance and
Configuration Management

(1) Types of Software Maintenance


 In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine
maintenance task, such as fixing a bug discovered by a user, or it may be a large event in itself, based on the
size or nature of the maintenance. Following are some types of maintenance based on their characteristics:
Corrective Maintenance:
 This includes modifications and updates made in order to correct or fix problems, which are either
discovered by users or concluded from user error reports.
 Corrective maintenance deals with the repair of faults or defects found in day-to-day system functions.
 A defect can result due to errors in software design, logic and coding.
 Design errors occur when changes made to the software are incorrect, incomplete, wrongly
communicated, or the change request is misunderstood.
Adaptive Maintenance:
 This includes modifications and updations applied to keep the software product up-to date and tuned to
the ever changing world of technology and business environment.
 Adaptive maintenance is the implementation of changes in a part of the system, which has been affected
by a change that occurred in some other part of the system.
 Adaptive maintenance consists of adapting software to changes in the environment such as the
hardware or the operating system.
 The term environment in this context refers to the conditions and the influences which act (from outside)
on the system.
Perfective Maintenance:
 This includes modifications and updates done in order to keep the software usable over a long period of
time. It includes new features and new user requirements for refining the software and improving its
reliability and performance.
 Perfective maintenance mainly deals with implementing new or changed user requirements.
 Perfective maintenance involves making functional enhancements to the system in addition to the
activities to increase the system's performance even when the changes have not been suggested by
faults.
 This includes enhancing both the function and efficiency of the code and changing the functionalities of
the system as per the users' changing needs.
Preventive Maintenance:
 This includes modifications and updates to prevent future problems in the software. It aims to address
problems which are not significant at the moment but may cause serious issues in the future.
 Preventive maintenance involves performing activities to prevent the occurrence of errors.
 It tends to reduce the software complexity thereby improving program understandability and increasing
software maintainability. It comprises documentation updating, code optimization, and code
restructuring.
 Documentation updating involves modifying the documents affected by the changes in order to
correspond to the present state of the system.

(2) Re-Engineering
 When we need to update the software to keep it current with the market, without impacting its
functionality, it is called software re-engineering.
 It is a thorough process where the design of software is changed and programs are re-written.
 Legacy software cannot keep pace with the latest technology available in the market.
 As the hardware becomes obsolete, updating the software becomes a headache.
 Even if software grows old with time, its functionality does not.
 For example, initially UNIX was developed in assembly language. When language C came into existence,
UNIX was re-engineered in C, because working in assembly language was difficult.


 Other than this, sometimes programmers notice that a few parts of the software need more maintenance
than others, and these also need re-engineering.
Re-Engineering Process
 Decide what to re-engineer. Is it whole software or a part of it?
 Perform Reverse Engineering, in order to obtain specifications of existing software.
 Restructure Program if required. For example, changing function-oriented programs into object-oriented
programs. Re-structure data as required.
 Apply Forward engineering concepts in order to get re-engineered software.

(3) Reverse Engineering


 Reverse engineering can extract design information from source code, but the abstraction level, the
completeness of the documentation, the degree to which tools and a human analyst work together, and the
directionality of the process are highly variable.
 The abstraction level of a reverse engineering process and the tools used to effect it refers to the
sophistication of the design information that can be extracted from source code.
 Ideally, the abstraction level should be as high as possible.
 That is, the reverse engineering process should be capable of deriving procedural design representations
(a low-level abstraction), program and data structure information (a somewhat higher level of
abstraction), object models, data and/or control flow models (a relatively high level of abstraction), and
entity relationship models (a high level of abstraction).
 As the abstraction level increases, you are provided with information that will allow easier understanding
of the program.
 The completeness of a reverse engineering process refers to the level of detail that is provided at an
abstraction level. In most cases, the completeness decreases as the abstraction level increases.
 Interactivity refers to the degree to which the human is “integrated” with automated tools to create an
effective reverse engineering process.
 In most cases, as the abstraction level increases, interactivity must increase or completeness will suffer.
 The directionality of the reverse engineering process is one-way; all information extracted from the
source code is provided to the software engineer, who can then use it during any maintenance activity.

(4) Forward Engineering


 Forward engineering is a process of obtaining desired software from the specifications in hand, which
were derived by means of reverse engineering. It assumes that there was some software
engineering already done in the past.
 Forward engineering is the same as the software engineering process, with only one difference: it is
always carried out after reverse engineering.
 The forward engineering process applies software engineering principles, concepts, and methods to re-
create an existing application. In most cases, forward engineering does not simply create a modern
equivalent of an older program.
 Rather, new user and technology requirements are integrated into the reengineering effort.
 The redeveloped program extends the capabilities of the older application.

(5) The SCM (Software Configuration Management) Process


 The software configuration management process defines a series of tasks that have four primary
objectives:
1. To identify all items that collectively define the software configuration.
2. To manage changes to one or more of these items.
3. To facilitate the construction of different versions of an application.
4. To ensure that software quality is maintained as the configuration evolves over time.
 Referring to the figure, SCM tasks can be viewed as concentric layers.

 SCIs (Software Configuration Item) flow outward through these layers throughout their useful life,
ultimately becoming part of the software configuration of one or more versions of an application or
system.
 As an SCI moves through a layer, the actions implied by each SCM task may or may not be applicable. For
example, when a new SCI is created, it must be identified.
 However, if no changes are requested for the SCI, the change control layer does not apply. The SCI is
assigned to a specific version of the software (version control mechanisms come into play).
 A record of the SCI (its name, creation date, version designation, etc.) is maintained for configuration
auditing purposes and reported to those with a need to know.
 In the sections that follow, we examine each of these SCM process layers in more detail.

Figure: Layers of SCM Process

(6) Identification of Objects in the Software Configuration


 To control and manage software configuration items, each should be separately named and then
organized using an object-oriented approach.
 Two types of objects can be identified: basic objects and aggregate objects.
 A basic object is a unit of information that you create during analysis, design, code, or test.
 For example, a basic object might be a section of a requirements specification, part of a design model,
source code for a component, or a suite of test cases that are used to exercise the code.
 An aggregate object is a collection of basic objects and other aggregate objects.
 For example, a Design Specification is an aggregate object. Conceptually, it can be viewed as a named
(identified) list of pointers that specify aggregate objects such as Architectural Model and Data Model,
and basic objects such as Component and UML Class Diagram.
 Each object has a set of distinct features that identify it uniquely: a name, a description, a list of
resources, and a “realization.”
 The object name is a character string that identifies the object unambiguously.
 The object description is a list of data items that identify the SCI type (e.g., model element, program,
data) represented by the object, a project identifier, and change and/or version information.
 Resources are “entities that are provided, processed, referenced or otherwise required by the object”.
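These identifying features map naturally onto a simple record type. A Python sketch (all names invented) of a configuration object and an aggregate built from it:

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationObject:
        # The distinct identifying features described above.
        name: str          # character string that identifies the object unambiguously
        description: dict  # SCI type, project identifier, change/version information
        resources: list = field(default_factory=list)  # entities the object requires
        realization: str = ""  # pointer to the work product itself (e.g., a file)

    design_spec = ConfigurationObject(
        name="DesignSpecification-1.4",
        description={"type": "document", "project": "SE-2160701", "version": "1.4"},
        resources=["ArchitecturalModel", "DataModel"],
        realization="docs/design_specification_v1_4.pdf",
    )
    print(design_spec.name)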


(7) Version Control and Change Control

Version Control
 Version control combines procedures and tools to manage different versions of configuration objects
that are created during the software process.
 A version control system implements or is directly integrated with four major capabilities:
1. A project database (repository) that stores all relevant configuration objects.
2. A version management capability that stores all versions of a configuration object.
3. A make facility that enables you to collect all relevant configuration objects and construct a specific
version of the software.
4. An issues tracking (also called bug tracking) capability, often implemented jointly with change control
systems, that enables the team to record and track the status of all outstanding issues associated with
each configuration object.
 A number of version control systems establish a change set—a collection of all changes (to some baseline
configuration) that are required to create a specific version of the software.
 A change set “captures all changes to all files in the configuration along with the reason for changes and
details of who made the changes and when.”
 A number of named change sets can be identified for an application or system.
 This enables you to construct a version of the software by specifying the change sets (by name) that must
be applied to the baseline configuration.
 To accomplish this, a system modeling approach is applied. The system model contains:
1. A template that includes a component hierarchy and a “build order” for the components that
describes how the system must be constructed.
2. Construction rules.
3. Verification rules.
 The primary difference in approaches is the sophistication of the attributes that are used to construct
specific versions and variants of a system and the mechanics of the process for construction.
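A toy sketch of the change-set idea: a baseline maps files to versions, each named change set records its deltas (with the reason for the change), and a version is constructed by applying named change sets to the baseline. All names are invented:

    # Baseline configuration: each file at its baselined version.
    baseline = {"ui.py": "v1", "db.py": "v1", "auth.py": "v1"}

    # Named change sets: the deltas needed to create a specific version.
    change_sets = {
        "CS-14": {"ui.py": "v2"},    # reason: fix layout defect
        "CS-15": {"auth.py": "v2"},  # reason: add password policy
    }

    def build_version(baseline, names):
        # Construct a version by applying the named change sets in order.
        version = dict(baseline)
        for name in names:
            version.update(change_sets[name])
        return version

    print(build_version(baseline, ["CS-14", "CS-15"]))
    # {'ui.py': 'v2', 'db.py': 'v1', 'auth.py': 'v2'}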

Change Control
 For a large software project, uncontrolled change rapidly leads to chaos.
 For such projects, change control combines human procedures and automated tools to provide a
mechanism for the control of change.
 A change request is submitted and evaluated to assess technical merit, potential side effects, overall
impact on other configuration objects and system functions, and the projected cost of the change.
 The results of the evaluation are presented as a change report, which is used by a change control
authority (CCA), a person or group that makes a final decision on the status and priority of the change.
 An engineering change order (ECO) is generated for each approved change.
 The ECO describes the change to be made, the constraints that must be respected, and the criteria for
review and audit.
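The flow from change request to engineering change order can be pictured as two simple records; a sketch with invented field names:

    from dataclasses import dataclass

    @dataclass
    class ChangeRequest:
        # Evaluated for technical merit, side effects, impact, and cost.
        request_id: str
        description: str
        projected_cost: float
        status: str = "submitted"  # submitted -> evaluated -> approved or denied

    @dataclass
    class EngineeringChangeOrder:
        # Generated for each approved change.
        request_id: str
        change_description: str
        constraints: str
        review_and_audit_criteria: str

    cr = ChangeRequest("CR-101", "Correct rounding in invoice totals", 1200.0)
    cr.status = "approved"
    eco = EngineeringChangeOrder(cr.request_id,
                                 "Round invoice totals to two decimal places",
                                 "No change to the public billing API",
                                 "Billing regression tests must pass")
    print(eco.request_id)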
