SE Unit 3,4,5
Design principles
1. The design process should not suffer from “tunnel vision.”
2. The design should be traceable to the analysis model.
3. The design should not reinvent the wheel.
4. The design should “minimize the intellectual distance” between the software and the problem as it exists
in the real world.
5. The design should exhibit uniformity and integration.
6. The design should be structured to accommodate change.
7. The design should be structured to degrade gently, even when abnormal data, events, or operating
conditions are encountered.
8. Design is not coding, coding is not design.
9. The design should be assessed for quality as it is being created, not after the fact.
10. The design should be reviewed to minimize conceptual (semantic) errors.
Architecture “constitutes a relatively small, intellectually graspable model of how the system is
structured and how its components work together”
Architectural Styles
The software that is built for computer-based systems also exhibits one of many architectural styles.
Each style describes a system category that encompasses
1 A set of components (e.g., a database, computational modules) that perform a function required by a
system.
2 A set of connectors that enable “communication, coordination, and cooperation” among
components.
3 Constraints that define how components can be integrated to form the system.
4 Semantic models that enable a designer to understand the overall properties of a system by
analyzing the known properties of its constituent parts.
Data-centered architecture style
A data store (e.g., a file or database) resides at the center of this architecture and is accessed frequently
by other components that update, add, delete, or otherwise modify data within the store.
Client software accesses a central repository.
In some cases the data repository is passive.
That is, client software accesses the data independent of any changes to the data or the actions of other
client software.
Classification of Cohesion
Coincidental cohesion
• A module is said to have coincidental cohesion, if it performs a set of tasks that relate to each other very
loosely, if at all.
• In this case, the module contains a random collection of functions. It is likely that the functions have
been put in the module out of pure coincidence without any thought or design.
• For example, in a transaction processing system (TPS), the get-input, print-error, and
summarize-members functions are grouped into one module.
Logical cohesion
• A module is said to be logically cohesive, if all elements of the module perform similar operations, e.g.
error handling, data input, data output, etc.
• An example of logical cohesion is the case where a set of print functions generating different output
reports are arranged into a single module.
Temporal cohesion
• When a module contains functions that are related by the fact that all the functions must be executed in
the same time span, the module is said to exhibit temporal cohesion.
• The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit
temporal cohesion.
Procedural cohesion
• A module is said to possess procedural cohesion, if the set of functions of the module are all part of a
procedure (algorithm) in which certain sequence of steps have to be carried out for achieving an
objective, e.g. the algorithm for decoding a message.
Communicational cohesion
• A module is said to have communicational cohesion, if all functions of the module refer to or update the
same data structure, e.g. the set of functions defined on an array or a stack.
Sequential cohesion
• A module is said to possess sequential cohesion, if the elements of a module form the parts of a
sequence, where the output from one element of the sequence is input to the next.
• For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.
Functional cohesion
• Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single
function. For example, a module containing all the functions required to manage employees’ pay-roll
exhibits functional cohesion.
• Suppose a module exhibits functional cohesion and we are asked to describe what the module does,
then we would be able to describe it using a single sentence.
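The single-sentence test above can be illustrated with a minimal sketch (all function names here are hypothetical): a functionally cohesive payroll module whose every function contributes to one describable task, computing an employee's pay.

```python
# A hypothetical sketch of functional cohesion: every function in this
# "payroll" module cooperates to achieve a single function, so the module
# can be described in one sentence: "it computes an employee's net pay."

def compute_gross(hours_worked, hourly_rate):
    """Gross pay before deductions."""
    return hours_worked * hourly_rate

def apply_deductions(gross, tax_rate=0.2):
    """Subtract a flat-rate tax deduction from gross pay."""
    return gross * (1 - tax_rate)

def net_pay(hours_worked, hourly_rate, tax_rate=0.2):
    """The module's single purpose: net pay for one employee."""
    return apply_deductions(compute_gross(hours_worked, hourly_rate), tax_rate)

if __name__ == "__main__":
    print(net_pay(40, 10))  # 40 hours at rate 10, 20% tax -> 320.0
```

By contrast, a coincidentally cohesive module would bundle `net_pay` together with, say, a print-error routine that shares nothing with it but the file it lives in.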
Classification of Coupling
Data coupling
• Two modules are data coupled, if they communicate through a parameter. An example is an elementary
data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc.
• This data item should be problem related and not used for the control purpose.
Stamp coupling
• Two modules are stamp coupled, if they communicate using a composite data item such as a record in
PASCAL or a structure in C.
Control coupling
• Control coupling exists between two modules, if data from one module is used to direct the order of
instruction execution in another.
• An example of control coupling is a flag set in one module and tested in another module.
Common coupling
• Two modules are common coupled, if they share data through some global data items.
Content coupling
• Content coupling exists between two modules, if they share code, e.g. a branch from one module into
another module.
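The difference between data coupling and control coupling can be made concrete with a small, hypothetical sketch: one callee receives only a problem-related value, while the other receives a flag that steers its internal logic.

```python
# Hedged sketch contrasting coupling types; both function names are invented.

def area_of_square(side):
    """Data coupled: receives one elementary, problem-related data item."""
    return side * side

def report(value, as_csv):
    """Control coupled: 'as_csv' is a flag set by the caller that directs
    which branch of the callee executes (not problem data)."""
    if as_csv:
        return f"value,{value}"
    return f"value = {value}"

if __name__ == "__main__":
    print(area_of_square(3))     # 9
    print(report(9, as_csv=True))
```

Control coupling is considered worse than data coupling because the caller must know about the callee's internal decision structure to set the flag correctly.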
Aesthetic Design
Aesthetic design, also called graphic design, is an artistic endeavor that complements the technical
aspects of WebApp design.
Without it, a WebApp may be functional, but unappealing. With it, a WebApp draws its users into a world
that embraces them on a primitive, as well as an intellectual level.
Content Design
Content design focuses on two different design tasks, each addressed by individuals with different skill
sets.
First, a design representation for content objects and the mechanisms required to establish their
relationship to one another is developed.
In addition, the information within a specific content object is created.
The latter task may be conducted by copywriters, graphic designers, and others who generate the
content to be used within a WebApp.
Architecture Design
Architecture design is tied to the goals established for a WebApp, the content to be presented, the users
who will visit, and the navigation philosophy that has been established.
In most cases, architecture design is conducted in parallel with interface design, aesthetic design, and
content design.
Because the WebApp architecture may have a strong influence on navigation, the decisions made during
this design action will influence work conducted during navigation design.
Navigation Design
Once the WebApp architecture has been established and the components (pages, scripts, applets, and other
processing functions) of the architecture have been identified, you must define navigation pathways that
enable users to access WebApp content and functions.
Component-Level Design
Modern WebApps deliver increasingly sophisticated processing functions that
1. Perform localized processing to generate content and navigation capability in a dynamic fashion.
2. Provide computation or data-processing capabilities that are appropriate for the WebApp’s business
domain.
3. Provide sophisticated database query and access.
4. Establish data interfaces with external corporate systems.
Coding
Good software development organizations normally require their programmers to adhere to some well-
defined and standard style of coding called coding standards.
Most software development organizations formulate their own coding standards that suit them most,
and require their engineers to follow these standards rigorously.
The purpose of requiring all engineers of an organization to adhere to a standard style of coding is the
following:
̶ A coding standard gives a uniform appearance to the code written by different engineers.
̶ It enhances code understanding.
̶ It encourages good programming practices.
A coding standard lists several rules to be followed during coding, such as the way variables are to be
named, the way the code is to be laid out, error return conventions, etc.
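A fragment can show such rules in action; the standard sketched here (lower_snake_case names, a documented error-return convention of `None` on failure, consistent indentation) is hypothetical, standing in for whatever conventions an organization adopts.

```python
# Illustrative fragment written to a hypothetical coding standard:
# - variable and function names in lower_snake_case
# - error-return convention: return None on failure, documented in the docstring
# - consistent 4-space indentation and a docstring on every function

def find_customer_index(customer_ids, target_id):
    """Return the index of target_id in customer_ids, or None on failure
    (the error-return convention mandated by the standard)."""
    for index, customer_id in enumerate(customer_ids):
        if customer_id == target_id:
            return index
    return None
```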
Code Review
Code review for a module is carried out after the module has been successfully compiled and all the
syntax errors have been eliminated.
Code reviews are an extremely cost-effective strategy for reducing coding errors and producing high-
quality code. Normally, two types of reviews are carried out on the code of a module.
The two code review techniques are code inspection and code walkthrough.
Code Inspection
In contrast to code walkthrough, the aim of code inspection is to discover some common types of errors
caused by oversight and improper programming.
In other words, during code inspection the code is examined for the presence of certain kinds of errors,
in contrast to the hand simulation of code execution done in code walk through.
For instance, consider the classical error of writing a procedure that modifies a formal parameter while
the calling routine calls that procedure with a constant actual parameter.
It is more likely that such an error will be discovered by looking for these kinds of mistakes in the code,
rather than by simply hand simulating execution of the procedure.
In addition to the commonly made errors, adherence to coding standards is also checked during code
inspection.
Good software development companies collect statistics regarding different types of errors commonly
committed by their engineers and identify the type of errors most frequently committed.
Software Documentation
When a software product is developed, not only the executable files and the source code are produced;
various documents such as the users’ manual, software requirements specification (SRS) document,
design documents, test documents, installation manual, etc. are also developed as part of the software
engineering process.
All these documents are a vital part of good software development practice.
Different types of software documents can broadly be classified into the following:
̶ Internal documentation
̶ External documentation
Internal documentation is the code comprehension features provided as part of the source code itself.
Internal documentation is provided through appropriate module headers and comments embedded in
the source code.
Internal documentation is also provided through the useful variable names, module and function
headers, code indentation, code structuring, use of enumerated types and constant identifiers, use of
user-defined data types, etc.
Research has shown that meaningful variable names are often more helpful for code comprehension
than comments. This is of course in contrast to the common expectation that code commenting would be
the most useful; the finding clearly holds when comments are written without thought.
But even when code is carefully commented, meaningful variable names still are more helpful in
understanding a piece of code. Good software development organizations usually ensure good internal
documentation by appropriately formulating their coding standards and coding guidelines.
External documentation is provided through various types of supporting documents such as users’
manual, software requirements specification document, design document, test documents, etc.
A systematic software development style ensures that all these documents are produced in an orderly
fashion.
Testing Strategies
Initially, system engineering defines the role of software and leads to software requirements analysis,
where the information domain, function, behavior, performance, constraints, and validation criteria for
software are established.
Moving inward along the spiral, you come to design and finally to coding. To develop computer software,
you spiral inward (counterclockwise) along streamlines that decrease the level of abstraction on each
turn.
Unit Testing:
A unit is the smallest testable part of a software system; it may include code files, classes, and
methods, which can be tested individually for correctness.
Unit testing is the process of validating such small building blocks of a complex system, well before
testing an integrated large module or the system as a whole.
Driver and/or stub software must be developed for each unit test. A driver is nothing more than a "main
program" that accepts test-case data, passes such data to the component, and prints relevant results.
Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
A stub or "dummy subprogram" uses the subordinate module's interface.
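The driver/stub arrangement above can be sketched in a few lines; `process_order` and `charge_card_stub` are invented names, with the stub standing in for a subordinate billing module.

```python
# Minimal sketch of a unit-test driver and stub (all names hypothetical).

def charge_card_stub(amount):
    """Stub: mimics the subordinate billing module's interface and returns
    a canned response instead of doing any real work."""
    return {"status": "ok", "charged": amount}

def process_order(quantity, unit_price, charge_fn):
    """Component under test: totals an order and charges it via the
    (possibly stubbed) subordinate module passed in as charge_fn."""
    total = quantity * unit_price
    return charge_fn(total)

def driver():
    """Driver: a throwaway 'main program' that feeds test-case data to the
    component under test and reports the result."""
    result = process_order(3, 5.0, charge_card_stub)
    print(result)
    return result

if __name__ == "__main__":
    driver()
```

Passing the subordinate function in as a parameter is one simple way to substitute the stub; a real test harness might instead patch the module's import.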
Integration Testing:
Integration is defined as a set of interactions among components.
Testing the interactions between the module and interactions with other system externally is called
Integration Testing.
Testing of integrated modules to verify combined functionality after integration.
Integration testing addresses the issues associated with the dual problems of verification and program
construction.
Modules are typically code modules, individual applications, client and server applications on a network,
etc. This type of testing is especially relevant to client/server and distributed systems.
Types of integration testing are:
̶ Top-down integration
̶ Bottom-up integration
̶ Regression testing
̶ Smoke testing
Validation Testing
The process of evaluating software during the development process or at the end of the development
process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as
demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
Validation testing provides final assurance that software meets all informational, functional, behavioral,
and performance requirements.
The alpha test is conducted at the developer’s site by a representative group of end users.
The software is used in a natural setting with the developer “looking over the shoulder” of the users and
recording errors and usage problems.
Alpha tests are conducted in a controlled environment.
Exciting requirements
These features go beyond the customer’s expectations and prove to be very satisfying when present.
For example, software for a new mobile phone comes with standard features, but is coupled with a set of
unexpected capabilities that delight every user of the product.
Although QFD concepts can be applied across the entire software process, specific QFD techniques are
applicable to the requirements elicitation activity.
QFD uses customer interviews and observation, surveys, and examination of historical data as raw data for
the requirements gathering activity.
Dimensions of Quality
Content is evaluated at both a syntactic and semantic level. At the syntactic level, spelling, punctuation, and
grammar are assessed for text-based documents. At a semantic level, correctness (of information presented),
Consistency (across the entire content object and related objects), and lack of ambiguity are all assessed.
Function is tested to uncover errors that indicate lack of conformance to customer requirements. Each
WebApp function is assessed for correctness, stability, and general conformance to appropriate
implementation standards (e.g., Java or AJAX language standards).
Structure is assessed to ensure that it properly delivers WebApp content and function, that it is extensible,
and that it can be supported as new content or functionality is added.
Usability is tested to ensure that each category of user is supported by the interface and can learn and apply
all required navigation syntax and semantics.
Navigability is tested to ensure that all navigation syntax and semantics are exercised to uncover any
navigation errors (e.g., dead links, improper links, and erroneous links).
Performance is tested under a variety of operating conditions, configurations, and loading to ensure that
the system is responsive to user interaction and handles extreme loading without unacceptable operational
degradation.
Compatibility is tested by executing the WebApp in a variety of different host configurations on both the
client and server sides. The intent is to find errors that are specific to a unique host configuration.
Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or
databases.
Security is tested by assessing potential vulnerabilities and attempting to exploit each. Any successful
penetration attempt is deemed a security failure.
Content Testing
Errors in WebApp content can be as trivial as minor typographical errors or as significant as incorrect
information, improper organization, or violation of intellectual property laws.
Content testing attempts to uncover these and many other problems before the user encounters them.
Content testing combines both reviews and the generation of executable test cases.
Reviews are applied to uncover semantic errors in content.
Executable testing is used to uncover content errors that can be traced to dynamically derived content
that is driven by data acquired from one or more databases.
Component-Level Testing
Component-level testing, also called function testing, focuses on a set of tests that attempt to uncover
errors in WebApp functions.
Each WebApp function is a software component (implemented in one of a variety of programming or
scripting languages) and can be tested using black-box (and in some cases, white-box) techniques.
Component-level test cases are often driven by forms-level input. Once forms data are defined, the user
selects a button or other control mechanism to initiate execution.
Navigation Testing
The job of navigation testing is to ensure that the mechanisms that allow the WebApp user to travel
through the WebApp are all functional and to validate that each navigation semantic unit (NSU) can be
achieved by the appropriate user category.
The navigation mechanisms that should be tested include navigation links, redirects, bookmarks, frames
and framesets, site maps, and internal search engines.
Configuration Testing
Configuration variability and instability are important factors that make WebApp testing a challenge.
Hardware, operating system(s), browsers, storage capacity, network communication speeds, and a
variety of other client-side factors are difficult to predict for each user.
One user’s impression of the WebApp and the manner in which she interacts with it can differ
significantly from another user’s experience, if both users are not working within the same client-side
configuration.
The job of configuration testing is not to exercise every possible client-side configuration.
Rather, it is to test a set of probable client-side and server-side configurations to ensure that the user
experience will be the same on all of them and to isolate errors that may be specific to a particular
configuration.
Security Testing
Security tests are designed to probe vulnerabilities of the client-side environment, the network
communications that occur as data are passed from client to server and back again, and the server-side
environment.
Each of these domains can be attacked, and it is the job of the security tester to uncover weaknesses that
can be exploited by those with the intent to do so.
Performance Testing
Performance testing is used to uncover performance problems that can result from lack of server-side
resources, inappropriate network bandwidth, inadequate database capabilities, faulty or weak operating
system capabilities, poorly designed WebApp functionality, and other hardware or software issues that
can lead to degraded client-server performance.
Importance of SQA
Quality control and assurance are essential activities for any business that produces products to be used
by others.
Prior to the twentieth century, quality control was the sole responsibility of the craftsperson who built a
product.
As time passed and mass production techniques became commonplace, quality control became an
activity performed by people other than the ones who built the product.
Software quality is one of the pivotal aspects of a software development company.
Software quality assurance starts from the beginning of a project, right from the analysis phase.
SQA checks the adherence to software product standards, processes, and procedures.
SQA includes the systematic process of assuring that standards and procedures are established and are
followed throughout the software development life cycle and test cycle as well.
Compliance of the built product with agreed-upon standards and procedures is evaluated through process
monitoring, product evaluation, project management, etc.
The major reason for involving software quality assurance in the process of software product
development is to make sure that the final product is built as per the requirement specification and
complies with the standards.
SQA Activities
Prepare an SQA plan for a project
The plan is developed as part of project planning and is reviewed by all stakeholders.
Quality assurance actions performed by the software engineering team and the SQA group are governed
by the plan.
The plan identifies evaluations to be performed, audits and reviews to be conducted, standards that are
applicable to the project, procedures for error reporting and tracking, work products that are produced
by the SQA group, and feedback that will be provided to the software team.
Participate in the development of the project’s software process description
The software team selects a process for the work to be performed.
The SQA group reviews the process description for compliance with organizational policy, internal
software standards, externally imposed standards, and other parts of the software project plan.
Review software engineering activities to verify compliance with the defined software process.
The SQA group identifies, documents, and tracks deviations from the process and verifies that
corrections have been made.
Audit designated software work products to verify compliance with those defined as part of the
software process
The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that
corrections have been made; and periodically reports the results of its work to the project manager.
Ensure that deviations in software work and work products are documented and handled
according to a documented procedure.
Deviations may be encountered in the project plan, process description, applicable standards, or
software engineering work products.
Record any noncompliance and report it to senior management
Noncompliance items are tracked until they are resolved.
SQA Techniques
Data Collection
Statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause (e.g., nonconformance to specifications,
design error, violation of standards, poor communication with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes),
isolate the 20 percent (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have caused the
defects.
A software engineering organization collects information on defects for a period of one year.
Some of the defects are uncovered as software is being developed.
Others are encountered after the software has been released to its end-users. Although hundreds of
different errors are uncovered, all can be tracked to one (or more) of the following causes:
̶ incomplete or erroneous specifications (IES)
̶ misinterpretation of customer communication (MCC)
̶ intentional deviation from specifications (IDS)
̶ violation of programming standards (VPS)
̶ error in data representation (EDR)
̶ inconsistent component interface (ICI)
̶ error in design logic (EDL)
̶ incomplete or erroneous testing (IET)
̶ inaccurate or incomplete documentation (IID)
̶ error in programming language translation of design (PLT)
̶ ambiguous or inconsistent human/computer interface (HCI)
̶ miscellaneous (MIS)
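The categorize-and-isolate steps above can be sketched as follows; the defect counts per cause category are invented purely for illustration.

```python
# Hedged sketch of statistical SQA steps 1-3: categorize defect counts, then
# apply the Pareto principle to isolate the "vital few" causes. The counts
# below are invented sample data, not measurements.
from collections import Counter

defect_counts = Counter({
    "IES": 205, "MCC": 156, "IDS": 48, "VPS": 25, "EDR": 130,
    "ICI": 58, "EDL": 45, "IET": 95, "IID": 36, "PLT": 60,
    "HCI": 28, "MIS": 56,
})

def vital_few(counts, fraction=0.8):
    """Return the smallest set of causes (largest first) accounting for
    `fraction` of all defects -- the Pareto 'vital few'."""
    total = sum(counts.values())
    running, selected = 0, []
    for cause, n in counts.most_common():
        selected.append(cause)
        running += n
        if running / total >= fraction:
            break
    return selected

if __name__ == "__main__":
    print(vital_few(defect_counts))
```

With these sample counts, a handful of cause categories (led by IES and MCC) covers 80 percent of all defects, which is where corrective effort would be focused first (step 4).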
Six Sigma: Refer Section 4 SQA standards
Measures of Reliability
A simple measure of reliability is mean-time-between-failure (MTBF):
MTBF = MTTF + MTTR
where the acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair, respectively.
Many researchers argue that MTBF is a far more useful measure than other quality-related software
metrics. An end user is concerned with failures, not with the total defect count.
Because each defect contained within a program does not have the same failure rate, the total defect
count provides little indication of the reliability of a system.
An alternative measure of reliability is failures-in-time (FIT), a statistical measure of how many failures a
component will have over one billion hours of operation.
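Both measures are simple arithmetic, as this sketch shows; the hour and failure figures are invented for illustration.

```python
# Worked sketch of the two reliability measures defined above.
# All numeric inputs are invented sample values.

def mtbf(mttf_hours, mttr_hours):
    """Mean time between failures: MTBF = MTTF + MTTR."""
    return mttf_hours + mttr_hours

def fit(failures, hours_of_operation):
    """Failures-in-time: failures normalized to 10**9 hours of operation."""
    return failures / hours_of_operation * 1e9

if __name__ == "__main__":
    print(mtbf(980.0, 20.0))   # 1000.0 hours between failures
    print(fit(2, 1_000_000))   # 2 failures in a million hours -> 2000.0 FIT
```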
Software Safety
Software safety is a software quality assurance activity that focuses on the identification and assessment
of potential hazards that may affect software negatively and cause an entire system to fail.
If hazards can be identified early in the software process, software design features can be specified that
will either eliminate or control potential hazards.
A modeling and analysis process is conducted as part of software safety.
Initially, hazards are identified and categorized by criticality and risk.
Although software reliability and software safety are closely related to one another, it is important to
understand the subtle difference between them.
Software reliability uses statistical analysis to determine the likelihood that a software failure will occur.
However, the occurrence of a failure does not necessarily result in a hazard or accident.
Software safety examines the ways in which failures result in conditions that can lead to an accident.
(4) The quality standards ISO 9000 and 9001, Six Sigma, CMM
ISO 9001
In order to bring quality in product & service, many organizations are adopting Quality Assurance System.
ISO standards are issued by the International Organization for Standardization (ISO) in Switzerland.
Proper documentation is an important part of an ISO 9001 Quality Management System.
ISO 9001 is the quality assurance standard that applies to software engineering.
It includes requirements that must be present for an effective quality assurance system.
The ISO 9001 standard is applicable to all engineering disciplines.
The requirements delineated by ISO 9001:2000 address topics such as
̶ management responsibility
̶ quality system
̶ contract review
̶ design control
̶ document and data control
̶ product identification and traceability
̶ process control
̶ inspection and testing
̶ preventive action
̶ control of quality records
̶ internal quality audits
̶ training
̶ servicing
̶ statistical techniques
In order for a software organization to become registered to ISO 9001:2000, it must establish policies and
procedures to address each of the requirements just noted (and others) and then be able to demonstrate
that these policies and procedures are being followed.
Six Sigma
Six sigma is “A generic quantitative approach to improvement that applies to any process.”
“Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects in any process,
from manufacturing to transactional and from product to service.”
To achieve Six Sigma, a process must not produce more than 3.4 defects per million opportunities.
5 Sigma -> 233 defects per million
4 Sigma -> 6,210 defects per million
Six Sigma has two methodologies: DMAIC (Define, Measure, Analyze, Improve, Control) for improving
existing processes, and DMADV (Define, Measure, Analyze, Design, Verify) for creating new processes or
products.
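The defects-per-million figure is a straightforward ratio, sketched below against the 3.4 threshold quoted above; the sample unit and defect counts are invented.

```python
# Hedged sketch: compute defects per million opportunities (DPMO) and check
# it against the Six Sigma threshold of 3.4 quoted above (which assumes the
# conventional 1.5-sigma shift). Sample figures are invented.

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def meets_six_sigma(defects, units, opportunities_per_unit):
    """True if the process is at or below 3.4 DPMO."""
    return dpmo(defects, units, opportunities_per_unit) <= 3.4

if __name__ == "__main__":
    # 34 defects across a million units, 10 opportunities each -> 3.4 DPMO
    print(dpmo(34, 1_000_000, 10))
    print(meets_six_sigma(34, 1_000_000, 10))
```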
(2) Re-Engineering
When we need to update the software to keep pace with the current market, without impacting its
functionality, it is called software re-engineering.
It is a thorough process in which the design of the software is changed and programs are re-written.
Legacy software cannot keep pace with the latest technology available in the market.
As the hardware becomes obsolete, updating the software becomes a headache.
Even if software grows old with time, its functionality does not.
For example, initially UNIX was developed in assembly language. When language C came into existence,
UNIX was re-engineered in C, because working in assembly language was difficult.
Other than this, sometimes programmers notice that few parts of software need more maintenance than
others and they also need re-engineering.
Re-Engineering Process
Decide what to re-engineer. Is it whole software or a part of it?
Perform Reverse Engineering, in order to obtain specifications of existing software.
Restructure Program if required. For example, changing function-oriented programs into object-oriented
programs. Re-structure data as required.
Apply Forward engineering concepts in order to get re-engineered software.
SCIs (Software Configuration Item) flow outward through these layers throughout their useful life,
ultimately becoming part of the software configuration of one or more versions of an application or
system.
As an SCI moves through a layer, the actions implied by each SCM task may or may not be applicable. For
example, when a new SCI is created, it must be identified.
However, if no changes are requested for the SCI, the change control layer does not apply. The SCI is
assigned to a specific version of the software (version control mechanisms come into play).
A record of the SCI (its name, creation date, version designation, etc.) is maintained for configuration
auditing purposes and reported to those with a need to know.
In the sections that follow, we examine each of these SCM process layers in more detail.
Version Control
Version control combines procedures and tools to manage different versions of configuration objects
that are created during the software process.
A version control system implements or is directly integrated with three major capabilities:
1. A project database (repository) that stores all relevant configuration objects.
2. A version management capability that stores all versions of a configuration object.
3. A make facility that enables you to collect all relevant configuration objects and construct a specific
version of the software.
In addition, version control and change control systems often implement an issues tracking capability
that enables the team to record and track the status of all outstanding issues associated with each
configuration object.
A number of version control systems establish a change set—a collection of all changes (to some baseline
configuration) that are required to create a specific version of the software.
A change set “captures all changes to all files in the configuration along with the reason for changes and
details of who made the changes and when.”
A number of named change sets can be identified for an application or system.
This enables you to construct a version of the software by specifying the change sets (by name) that must
be applied to the baseline configuration.
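The change-set idea above can be sketched as data: a version is built by applying named change sets, in order, to a baseline configuration. File names, set names, and contents here are all invented.

```python
# Hedged sketch of constructing a software version from named change sets.
# A change set is modeled as a dict of file -> new content; all names and
# contents are hypothetical.

baseline = {"main.c": "v0", "util.c": "v0"}

change_sets = {
    "fix-login": {"main.c": "v1"},                  # edits an existing file
    "add-cache": {"util.c": "v1", "cache.c": "v1"}, # edits one file, adds one
}

def build_version(baseline, change_sets, names):
    """Apply the named change sets, in order, to a copy of the baseline
    configuration and return the resulting configuration."""
    config = dict(baseline)
    for name in names:
        config.update(change_sets[name])
    return config

if __name__ == "__main__":
    print(build_version(baseline, change_sets, ["fix-login", "add-cache"]))
```

A real system would store diffs plus the who/when/why metadata quoted above rather than whole file contents, but the construction principle is the same.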
To accomplish this, a system modeling approach is applied. The system model contains:
1. A template that includes a component hierarchy and a “build order” for the components that
describes how the system must be constructed.
2. Construction rules.
3. Verification rules.
The primary difference in approaches is the sophistication of the attributes that are used to construct
specific versions and variants of a system and the mechanics of the process for construction.
Change Control
For a large software project, uncontrolled change rapidly leads to chaos.
For such projects, change control combines human procedures and automated tools to provide a
mechanism for the control of change.
A change request is submitted and evaluated to assess technical merit, potential side effects, overall
impact on other configuration objects and system functions, and the projected cost of the change.
The results of the evaluation are presented as a change report, which is used by a change control
authority (CCA), a person or group that makes a final decision on the status and priority of the change.
An engineering change order (ECO) is generated for each approved change.
The ECO describes the change to be made, the constraints that must be respected, and the criteria for
review and audit.