Software Engineering Book


Shree Mahaveerai Namah

Chapter 1
Software Engineering Introduction

 Computer software is a product, or program code, developed by software engineers.


 The applications of computer software include telecommunications, the military, medical sciences,
online shopping, office products, the IT industry, and many more. Today the usage of computers
has become ubiquitous.
 Software consists of source code, data and the related documents and manuals needed to work
with the software system.
 Software is the key element in all computer-based systems and products.
 The main purpose of software engineering is to provide a framework for building a
software system (a software product) of the best quality.

Software engineering definitions

 The establishment and use of sound engineering principles in order to obtain, economically,
software that is reliable and works efficiently on real machines.
 Software engineering is a systematic and disciplined approach towards the development,
operation and maintenance of software.
 Software engineering is an engineering branch associated with the development of software
products using well-defined scientific principles, methods and procedures.

Characteristics of a software

 Software should achieve good quality in design and meet all the specifications of the
customer.
 Software does not wear out, i.e. it does not degrade physically the way hardware does.
 Software systems may be inherently simple or complex.
 Software must be efficient, i.e. able to use system resources in an effective and
efficient manner.
 Software must have integrity, i.e. it must prevent unauthorized access to the software or
its data.
Software engineering - Layered technology

 Software engineering is a fully layered technology.

 To develop software, we need to go from one layer to another.
 All these layers are related to each other, and each layer demands the fulfillment of the
previous layer.

Figure 1 – 1 Four layers for software development

The layered technology consists of:

1. Quality focus
The characteristics of good quality software are:

 Correctness of the functions required to be performed by the software.

 Maintainability of the software.

 Integrity, i.e. providing security so that unauthorized users cannot access
information or data.

 Usability, i.e. the effort required to use or operate the software.

2. Process

 It is the base layer, or foundation layer, for software engineering.

 The software process is the key that holds all the layers together.

 It defines a framework that includes different activities and tasks.

 In short, it covers all activities, actions and tasks required to be carried out for
software development.

 The processes that deal with the technical and managerial aspects of software development
are collectively called software processes.

 There are two distinct categories of software process: i) technical processes and ii)
management processes.

 The technical processes specify all engineering activities, and the management
processes specify how to plan, monitor and control the technical processes so that cost,
timing, quality and productivity goals are met.

3. Methods

 The methods provide the answers to all the 'how-to' questions that arise during the process.

 They provide the technical way to implement the software.

 They include a collection of tasks: communication, requirements analysis,
analysis and design modelling, program construction, testing and support.

4. Tools

 A software engineering tool provides automated support for software development.

 The tools are integrated, i.e. the information created by one tool can be used by
another tool.

 For example, Microsoft Publisher can be used as a web design tool.

CMM
The Capability Maturity Model (CMM) is a model for improving software processes during
software development. The Software Engineering Institute's (SEI) CMM provides a well-known
benchmark for judging the quality level of a company's development processes. CMM and its
levels are a good benchmark for a company to study the processes and practices
it follows. It allows a company to judge its maturity in handling complex
software development tasks and achieving its software quality goals.

To understand CMM, one must understand the word 'mature' in the context of software
development and delivering a product to a client. Being mature means a company should be able
to

 see the larger picture with its ramifications,

 identify the problem(s), and

 find solutions and make good, rational decisions based on a balanced view of the scenario.

Thus CMM is a methodology used to develop and refine a company's software
development processes.

The model describes a five-level path of increasing maturity which reflects a company's
capability of executing software processes for development, including producing production
versions. The results of such an evaluation indicate the likely performance of the company if
it is awarded software development work. CMM is similar to ISO 9001, which specifies an
effective quality system for software development and maintenance.

The main difference between CMM and ISO 9001 lies in their respective aims. ISO 9001
specifies a minimal acceptable quality level for software processes, while CMM establishes a
framework for continuous software process improvement. CMM is used to assess a company
against a scale of five process maturity levels based on certain Key Process Areas (KPAs).
The five levels are (IMDQO):

i. Initial (chaotic and ad hoc)

ii. Managed (for projects; reactive)

iii. Defined (for the company; proactive)

iv. Quantitatively Managed (measured and controlled)

v. Optimizing (further improvement)

Figure 1 – 2 Diagrammatic representation of CMM levels 1 to 5.

Table 1 - 1 State of software processes at different CMM levels.

CMM Level      Description

CMM Level-1    Initial: undefined and chaotic approach to software development.

CMM Level-2    Repeatable: stable performance; basic software processes for each project are in
               place. Reactive to different project scenarios (demands).

CMM Level-3    Defined: the company has well-defined processes for all projects and activities.
               All details are well documented.

CMM Level-4    Managed: software processes are managed through company standards. Data on
               software metrics and quality are collected for setting standards.

CMM Level-5    Optimizing: using knowledge databases, software development processes are
               improved with innovative ideas.
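The level descriptions above can be captured as a simple lookup structure. The following is a minimal Python sketch; the names `CMM_LEVELS` and `describe` are illustrative only and not part of any standard API:

```python
# Illustrative mapping of CMM maturity levels to (name, key traits).
CMM_LEVELS = {
    1: ("Initial", "chaotic, ad hoc; success depends on individuals"),
    2: ("Managed", "repeatable project-level processes; reactive"),
    3: ("Defined", "company-wide, documented processes; proactive"),
    4: ("Quantitatively Managed", "measured and controlled via metrics"),
    5: ("Optimizing", "continuous, data-driven process improvement"),
}

def describe(level: int) -> str:
    """Return 'Level n (name): traits' for a CMM level."""
    name, traits = CMM_LEVELS[level]
    return f"Level {level} ({name}): {traits}"
```

For example, `describe(3)` yields the level-3 summary string.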

Table 1 – 2 CMM levels, characteristics and KPAs

Level 1 (Initial)
Characteristics: chaotic, unpredictable, high risk of non-performance.
KPAs: key areas not defined.

Level 2 (Repeatable)
Characteristics: methodical in umbrella activities and processes; performance is repeated but
not improved.
KPAs: requirements analysis; project planning; sub-contracting and outsourcing; SQA and
low-level configuration management; main umbrella activities in use.

Level 3 (Defined)
Characteristics: improved performance in cost, schedule and risk management.
KPAs: company processes in place; integrated management; company focus on training, HR
management and quality initiatives; software development supported with a customer-centric
approach.

Level 4 (Managed)
Characteristics: positive learning curve from project experience and improvement in all key
areas of the project.
KPAs: process management with the goal of efficiency and effectiveness; cost-benefit ratios.

Level 5 (Continuous improvement)
Characteristics: continuous improvement in all processes through learning; high performance in
all quality attributes and RMMM for risk.
KPAs: process choice for each project linked to its characteristics; verification, testing and
maintainability of the project; goal is to improve the cost-benefit ratio.

Some important definitions:

Software Process: A set of activities, methods, practices and performances that people
use to develop and maintain software and its associated products (SEI-CMM).

Actual Process: The actual process is what you actually do, with all the omissions, mistakes and
oversights, in developing software.

Process capability and maturity

Capability:
 The range of expected results that can be obtained by following software processes.

 A means of predicting the most likely outcome to be expected from the next software
project.

Maturity:

 The extent to which all software processes are defined, measured, monitored, controlled
and implemented.

Figure 1 - 3 Expected improvement in company performance after implementing CMM

How do companies get CMM levels?

Evaluation of the maturity level of a company is done by the SEI using a software
capability evaluation questionnaire. It includes analysis of processes, interviews, scrutiny of
documents and manuals, and study of project management and company processes.

Umbrella Activities
While developing software, it is important to write correct code. It is equally important to
take measurements to track the development of the application and make the necessary
changes for any improvement that might be needed. These framework activities in software
engineering fall under the category of umbrella activities.

Software Project Tracking And Control

The framework of the project is well planned, keeping in mind the time frame and deadlines.
These deadlines are also commitments made to the client or the user.

Therefore, it is equally important to keep to this schedule. Software project tracking is a
measure taken by software engineers to track and control the timing and schedule of the
project.

Risk Management

Risk management, as the name suggests, involves analyzing the potential risks of the
application in terms of quality or outcome. These risks may create a drastic impact on the
outcome and the functionality of the application.

Completing a full-stack developer certification can help reduce the risk (my
recommendation). Such a course helps software engineers become better at their work, thus
reducing the risk associated with software development. Risk is the financial loss that is
likely to occur if the project deviates from its planned execution, resulting in bad quality or
complete failure.

Software Quality Assurance

Testing the quality of the software is necessary. It helps to determine how the application will
perform when used or launched in the market. Once quality testing is done, the client is assured
that everything in the application went as planned. This is known as the software quality
framework in software engineering.

Formal Technical Reviews

Evaluation of errors is done at every step of the generic process framework for software
engineering. This helps to avoid a major blunder at the end of the process.

The software engineers check their work for technical and quality issues and find technical
bugs or glitches. Step-wise error analyses help in the smooth functioning of software
development.

Software Configuration Management

The software configuration process helps in managing the application when changes are
necessary.

Measurement

The software engineers collect relevant data that is analyzed to measure the quality and
development of the application.
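As a concrete illustration of the measurement activity, one widely used metric is defect density, the number of defects found per thousand lines of code (KLOC). The following Python sketch uses an illustrative function name; it is not tied to any particular tool:

```python
# Hypothetical measurement helper: defect density is a common software
# metric, computed as defects found per thousand lines of code (KLOC).
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per KLOC."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)
```

For instance, 12 defects found in 8,000 lines of code gives `defect_density(12, 8000)`, i.e. 1.5 defects per KLOC.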
Chapter 2
SDLC Process Activities

The System Development Life Cycle, "SDLC" for short, is a multistep, iterative process,
structured in a methodical way. This process is used to model or provide a framework for
technical and non-technical process activities to deliver a quality system which meets or
exceeds a business's expectations, or to manage decision-making progression.
Phase 1: SRS

Preparing the Software Requirement Specification (SRS) is a skilled process. It needs
experienced software engineers to gather information about customer expectations without any
mistakes. The SRS is a document which contains all specifics about what the proposed software
system is going to deliver to the customer. It also specifies what cannot be done by the
software in Business Process Automation (BPA).

Information gathering from the client is a core function of this phase. Usually
it starts with a meeting between two teams, one from the client side and the other from the
software vendor. In this first meeting, a broad understanding of the proposed software system
is developed on both sides. The vendor team learns about the stakeholders, the end users and
the basic-level knowledge of the client team members. The client team learns about the project
manager and the other team members and their skill sets. Subsequent meetings are between
smaller, more specific groups. The vendor's SRS team then starts interviews and
questionnaires and studies original company documents, including minutes of meetings.

Phase 2: Planning

In the planning phase, project goals are determined and a high-level plan for the requirement
for project is established. Planning is the most fundamental and critical organizational phase.
The three primary activities involved in the planning phase are as follows:

 Identification of the system for development

 Feasibility assessment

 Creation of project plan

Phase 3: Analysis

In the analysis phase, end user business requirements are analyzed by a team headed by a
project manager. Project goals are converted into the defined system functions that the
organization intends to develop. The three primary activities involved in the analysis phase
are as follows:

 Creating blue-print for business requirement

 Creating process diagrams using UML

 Performing a detailed analysis


Business requirement analysis is the most crucial part at this level of the SDLC. Business
requirements are a brief set of business functionalities that the system needs to meet in order
to be successful. Technical details, such as the types of technology used in the implementation
of the system, may or may not be defined in this phase. A sample business requirement might
look like: "The system must create and store records for all employees by their respective
department, region, and designation". This requirement gives no detail as to how the system is
going to implement it, but rather states what the system must do with respect to the business.

Phase 4: Design

In the design phase, we describe the desired features and operations of the system. This phase
includes business rules, pseudo-code, screen interfaces/layouts, and other necessary
documentation. The two primary activities involved in the design phase are as follows:

 Designing of IT infrastructure

 Designing of system model

To avoid any crash, malfunction, or lack of performance, the IT infrastructure should have
solid foundations. In this phase, the specialist recommends the kinds of clients and servers
needed, on a cost and time basis, and assesses the technical feasibility of the system. If the
client does not have such infrastructure, the project leader may suggest a cloud solution.
Also in this phase, the organization creates interfaces for user interaction. In addition,
data models/structures and entity relationship diagrams (ERDs) are created in this phase.

Phase 5: Development (Implementation)

In the development phase, all the documents from the previous phase are transformed into the
actual system. The two primary activities involved in the development phase are as follows:

 Establishing IT infrastructure

 Writing code for databases and business logic/process.

In the design phase, only the blueprint of the IT infrastructure is provided, whereas in this
phase the organization actually purchases and installs the respective software and hardware in
order to support the IT infrastructure. If cloud solution is desired, appropriate cloud
infrastructure may be provisioned. Following this, the creation of the database and actual
code can begin to complete the system on the basis of given specifications.

Phase 6: Testing

In the testing phase, all the pieces of code are integrated and deployed in the testing
environment. Testers then follow Software Testing Life Cycle activities to check the system
for errors, bugs, and defects, and to verify that the system's functionalities work as
expected. The two primary activities involved in the testing phase are as follows:
 Writing test cases and creating use cases

 Execution of test cases

Testing is a critical part of the software development life cycle. To provide quality software, an
organization must perform testing in a systematic way. Once test cases are written, the tester
executes them and compares the expected result with the actual result in order to verify the
system and ensure it operates correctly. Writing test cases and executing them manually is an
intensive task for any organization, but one that can contribute to the success of the business
if executed properly.
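The "compare expected with actual" step described above can be sketched in a few lines of Python. The function under test and all names here are illustrative stand-ins, not a real testing framework:

```python
# Minimal sketch of test-case execution: each case pairs an input with
# an expected result, and the runner compares expected vs. actual.
def add(a, b):
    """Stand-in for the function under test."""
    return a + b

test_cases = [
    {"input": (2, 3), "expected": 5},
    {"input": (-1, 1), "expected": 0},
]

def run_tests(func, cases):
    """Return a pass/fail flag for each test case."""
    results = []
    for case in cases:
        actual = func(*case["input"])
        results.append(actual == case["expected"])
    return results
```

Running `run_tests(add, test_cases)` reports which cases passed; real test frameworks automate exactly this comparison at scale.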

Phase 7: Deployment

During this next phase, the system is deployed to a real-life (the client’s) environment where
the actual user begins to operate the system. All data and components are then placed in the
production environment. This phase is also referred to as ‘delivery.’

Phase 8: Maintenance

In the maintenance phase, any necessary enhancements, corrections and changes will be
made to make sure the system continues to work and stay updated to meet the business goals
of the client. Customer satisfaction is of paramount importance. It is necessary to maintain
and upgrade the system from time to time so it can adapt to future needs. The three primary
activities involved in the maintenance phase are as follows:

 Support the system users

 System maintenance

 System changes and adjustment

Choosing the Best SDLC Model

When selecting the best SDLC approach for your organization or company, it's important to
remember that one solution may not fit every scenario or business. Certain projects may run
best with a Waterfall approach, while others would benefit from the flexibility in the agile or
iterative models.

Before deploying an SDLC approach for your teams and staff, consider seeking advice from
experienced practitioners who have seen how the different models function best in different
industries and corporate environments, and who can help find a good fit for your situation.

Exercise

1. Describe the various phases of a software project, with a brief description of each phase.
SDLC Models
Waterfall Model

The Waterfall model is an example of a sequential model. In this model, the software
development activity is divided into different phases, and each phase consists of a series of
tasks and has different objectives.

The Waterfall model is the pioneer of the SDLC processes. In fact, it was the first model to be
widely used in the software industry. It is divided into phases, and the output of one phase
becomes the input of the next phase. It is mandatory for a phase to be completed before the
next phase starts. In short, there is no overlapping of phases in the Waterfall model.

In Waterfall, development of one phase starts only when the previous phase is complete.
Because of this, each phase of the Waterfall model is quite precise and well defined. Since
the phases fall from a higher level to a lower level, like a waterfall, it is named the Waterfall
model.
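The strictly sequential, output-feeds-input nature of the model can be sketched as a simple pipeline. This is an illustrative simplification in Python, with artifacts represented as strings; the phase names follow the text:

```python
# Sketch of the Waterfall idea: phases run strictly in order, and each
# phase consumes the artifact produced by the previous phase.
PHASES = [
    "requirements", "design", "implementation",
    "integration_and_testing", "deployment", "maintenance",
]

def run_waterfall(start_artifact: str) -> list:
    """Each phase completes before the next begins; no overlap."""
    artifacts = []
    current = start_artifact
    for phase in PHASES:
        current = f"{phase}({current})"  # output of one phase feeds the next
        artifacts.append(current)
    return artifacts
```

The nested result makes the dependency visible: no phase's artifact exists until every earlier phase has finished.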

Figure 4.2.1.1 Waterfall model phases

1. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and document
them properly. This phase consists of two different activities.

 Requirement gathering and analysis: First, all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an incomplete
requirement is one in which some parts of the actual requirements have been omitted)
and inconsistencies (inconsistent requirement is one in which some part of the
requirement contradicts with some other part).

 Requirement specification: The analyzed requirements are documented in a
software requirement specification (SRS) document. The SRS document serves as a
contract between the development team and the customers. Any future dispute between the
customers and the developers can be settled by examining the SRS document.

 After requirements analysis, feasibility studies may be carried out.

2. Design: The aim of this phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language,
like Python, Java etc.

3. Implementation (Coding) and Unit testing: In the coding phase, the software design is
translated into source code using a suitable programming language; thus each designed
module is coded. The aim of the unit testing phase is to check whether each module is
working properly.
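The coding-plus-unit-testing step can be sketched with Python's standard `unittest` module. The "module" here is a single illustrative function; the names are assumptions for the example:

```python
import unittest

# A designed "module" (here, one function) and its unit test.
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    """Unit tests checking the module works properly in isolation."""
    def test_basic_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Each coded module gets tests like these before it is handed to the integration step.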

4. Integration and System testing: Integration of the different modules is undertaken soon
after they have been coded and unit tested. Integration of the various modules is carried out
incrementally over a number of steps. During each integration step, previously planned
modules are added to the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and tested, the full working
system is obtained, and system testing is carried out on it.

System testing consists of three different kinds of testing activities as described below:

 α-testing: α-testing is the system testing performed by the development team.

 β-testing: β-testing is the system testing performed by a friendly set of
customers.

 Acceptance testing: After the software has been delivered, the customers
perform acceptance testing to determine whether to accept the delivered
software or to reject it.

5. Deployment: After the system is integrated and tested, it is deployed at the customer's
location and on their infrastructure. This phase is also known as roll-out. The first one or two
weeks after this phase are very important, because the end users begin to use the system in a
real-world environment and give their feedback.

6. Maintenance: Maintenance is the most important phase of a software life cycle. The
effort spent on maintenance is typically about 60% of the total effort spent to develop the full
software. There are basically three types of maintenance:

 Corrective Maintenance: This type of maintenance is carried out to correct
errors that were not discovered during the product development phase.

 Perfective Maintenance: This type of maintenance is carried out to enhance
the functionalities of the system based on the customer's request.

 Adaptive Maintenance: Adaptive maintenance is usually required for porting
the software to work in a new environment, such as on a new computer
platform or with a new operating system.

When to use SDLC Waterfall Model?

SDLC Waterfall model is used when--

 Requirements are stable and not changed frequently.

 An application is small.

 Every requirement is understood and clear.

 The environment is stable.

 The tools and technology used are stable and not dynamic.

 Resources are well trained and are available.

Pros and Cons of Waterfall model:

Advantages of using Waterfall model are as follows:

 Simple and easy to understand and use.

 For smaller projects, Waterfall model works well and yields appropriate results.

 Since the phases are rigid and precise, and one phase is completed at a time, the model is
easy to maintain. Therefore, this model can be used as the basis for other iterative models.

 Process, actions and results are very well documented.

Disadvantages of using Waterfall model:

 It cannot accommodate changes in requirements.

 It becomes very difficult to move back to a previous phase. For example, if the application
has moved to the testing stage and there is a change in requirements, it becomes
difficult to go back and change it.

 Delivery of the final product is late, as there is no prototype demonstrated at
intermediate stages.

 For bigger and complex projects, this model is not good as the risk factor is higher.

 Not suitable for projects where requirements are changed frequently.

 Does not work for long and ongoing projects.

 Since testing is done at a later stage, the model does not allow identifying challenges
and risks in earlier phases, so a risk mitigation strategy is difficult to prepare.

Conclusion:

In the Waterfall model, it is very important to take sign-off on the deliverables of each
phase. Although today most projects are moving to Agile and prototype models, the
Waterfall model still holds good for smaller projects. If requirements are straightforward and
testable, the Waterfall model will yield the best results.

Questions:

1. What is SDLC and what is it used for?

2. What are the different types of SDLC models?

3. What are the different phases of the Waterfall model?

4. What are the advantages and shortcomings of the Waterfall model?

Evolutionary Process Models

 Evolutionary models are iterative models (phases are repeated for refinement and
improvement).

 They allow developing more and more complete versions of the software.


The following are the evolutionary process models:

1. The prototyping model

2. The spiral model

3. The concurrent development model

1. The Prototyping model

 A prototype is defined as a first or preliminary form which provides the core functionality
of the software system.

 The prototype model is built around a set of core (main) objectives for the software.

 It does not identify detailed requirements such as inputs and outputs.

 It is a working software model of limited functionality.

 In this model, working programs are produced quickly.

Figure 4.2.1.2 Basic structure and iteration of prototyping model

The prototyping model is one of the most widely used Software Development Life Cycle
(SDLC) models. It is used when the customers do not know the exact project requirements
beforehand. In this model, a prototype of the end product, providing the core functionality of
the final product, is first developed and tested, and then refined as per customer feedback and
any changes in requirements.

There are 2 approaches for this model:

1. Rapid Throwaway Prototyping

This technique offers a useful method of exploring ideas and getting customer feedback on
each of them. In this method, a developed prototype need not necessarily be part of the
ultimately accepted product. Customer feedback helps to prevent unnecessary design
faults, and hence the final prototype developed is of better quality.

2. Evolutionary Prototyping

In this method, the prototype developed initially is incrementally refined on the basis of
customer feedback until it finally gets accepted. In comparison to rapid throwaway
prototyping, it offers a better approach which saves time as well as effort, because
developing a prototype from scratch for every iteration of the process can be very
frustrating for the developers.
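The evolutionary loop described above, where the same prototype is refined each round rather than rebuilt, can be sketched as follows. Feature names and the `refine` function are illustrative assumptions:

```python
# Sketch of evolutionary prototyping: the same prototype object is
# refined each iteration based on feedback, rather than rebuilt.
def refine(prototype: set, feedback: set) -> set:
    """Merge requested features from a feedback round into the prototype."""
    return prototype | feedback

prototype = {"login"}  # initial core functionality
feedback_rounds = [{"search"}, {"reports", "export"}]

for feedback in feedback_rounds:
    prototype = refine(prototype, feedback)
```

After the loop, the prototype has accumulated every requested feature without ever being thrown away, which is exactly the time and effort saving the text describes.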

Advantages of Prototyping Model

 The prototype model does not require knowing the detailed inputs, outputs, processes,
operating-system adaptability and full machine interaction up front.

 In the development process of this model, users are actively involved.

 The development process is the best platform to understand the system by the user.

 Errors are detected much earlier.

 Gives quick user feedback for better solutions.

 It identifies the missing functionality easily. It also identifies the confusing or difficult
functions.

Disadvantages of Prototyping Model:

 Client involvement is high, and this is not always welcomed by the developer.

 It is a slow process, because it takes more time for development.

 Many changes can disturb the rhythm of the development team.

 The prototype is thrown away when the users are confused by it.

The Spiral model

 The spiral model is a risk-analysis-driven process model.

 In the spiral model, if a risk is found during risk analysis, alternative solutions are
suggested and implemented.

 It is a combination of the prototype model and the sequential (Waterfall) model.

 In one iteration, all activities are done; for large projects, there may be several
iterations, resulting in delay.

 During iterations, the customer may change requirements, resulting in complications.
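The risk-driven decision at the heart of each spiral iteration can be sketched in a few lines. The threshold value and all names here are illustrative assumptions, not part of Boehm's model itself:

```python
# Sketch of the spiral model's risk-driven choice: each iteration
# assesses risk first, and an alternative is adopted when the assessed
# risk of the planned approach is too high.
RISK_THRESHOLD = 0.7  # illustrative cut-off on a 0..1 risk scale

def spiral_iteration(planned: str, risk: float, alternative: str) -> str:
    """Keep the planned approach, or switch to the alternative if risky."""
    if risk > RISK_THRESHOLD:
        return alternative
    return planned

chosen = spiral_iteration("build in-house parser", 0.9, "use existing library")
```

Here the high assessed risk (0.9) causes the alternative solution to be selected, mirroring the bullet above about suggesting alternatives when a risk is found.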

Figure 4.2.1.3 Boehm’s risk based spiral model for software development

Advantages of Spiral Model

 It reduces the amount of risk.

 It is good for large and critical projects.

 It provides strong customer approval and documentation control.

 In the spiral model, software is produced early in the life cycle process.

Disadvantages of Spiral Model

 Developing a software model can be costly.

 It is not suitable for small projects.

The concurrent development model


 The concurrent process model consists of activities moving from one state to another.

 Example states are the 'awaiting changes' state and the 'under development' state. This
is shown in Figure 4.2.1.4.

Figure 4.2.1.4 Various states of concurrent model. Each block of the project undergoes
same method.

 The communication activity is completed in the first iteration and exits into the 'awaiting
changes' state.

 The modeling activity completes its initial communication and then goes to the ‘under
development’ state.

 If the customer specifies a change in the requirement, then the modeling activity
moves from the ‘under development’ state into the ‘awaiting change’ state.

Advantages of the concurrent development model

 This model is applicable to all types of software development processes.

 It is easy to understand and use.

 It gives immediate feedback from testing, because each block is tested.


 It provides an accurate picture of the current state of a project.

Disadvantages of the concurrent development model

 It needs better communication between the team members, which may not be achieved
all the time.

 It requires keeping track of the status of the different activities.


Component Based Model

Component-based software engineering (CBSE) uses methods, tools and principles very similar
to those used in general software engineering. However, there are certain differences. The prime
difference is that CBSE distinguishes the process of "component development" from that
of "system development with components" by focusing on questions related to components.
Building systems from components

The main idea behind this is reusability: systems are built from pre-existing
components. However, there are certain consequences of using such an approach. Some of
the consequences are mentioned below.

1. The development processes of component-based systems are different from the
development processes of the components themselves.

2. A new, separate process is introduced: finding and evaluating the components.

3. Activities in these processes differ from activities in a non-component-based approach.

Figure 4.2.1.5 Architecture of CBM for software development (engineering)

This model goes through the SDLC phases in a slightly modified manner compared with the
normal SDLC phases. These are described below.

Requirements analysis and specification

It includes analyzing the solution to meet the requirements. The available components are
checked to see whether they can fulfil the requirements.

System and software design

As in the previous phase, this depends entirely upon the availability of components. The
chosen component model should be able to integrate with the potential components (those
which may be used).

Implementation and unit testing

The system is built by integrating the components. The concept of "glue code" is
used to specify the connections.
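The idea of glue code joining two pre-existing components whose interfaces do not match directly can be sketched as follows. The two component classes are entirely illustrative stand-ins:

```python
# Sketch of "glue code": adapting the output of one existing component
# to the input expected by another. Both components are hypothetical.
class CsvReader:  # existing component A
    def read(self):
        return "id,name\n1,Ada"

class ReportPrinter:  # existing component B
    def print_rows(self, rows):
        return [f"Row: {r}" for r in rows]

def glue(reader: CsvReader, printer: ReportPrinter):
    """Glue code: adapt A's raw text output to B's row-list input."""
    raw = reader.read()
    rows = raw.splitlines()[1:]  # drop the CSV header line
    return printer.print_rows(rows)
```

Neither component is modified; only the small `glue` function knows about both interfaces, which is what keeps the components reusable.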

System Integration

The application components along with the standard infrastructure components of the
component framework are integrated. This is often called component deployment.

System verification and validation

Standard techniques should be used. For example, locating an error is a specific problem in
the component-based approach. Here, components are of a "black box" type and may come from
different vendors. A component may show errors due to the malfunctioning of another component
(side effects).
Operation support and maintenance

This is similar to the integration process: the system deploys a new or modified component.
In most cases an existing component is modified, but a new version of the same
component can also be integrated.

Component Model Implementations and Services

The execution of the components should be supported by a run-time environment. The run-
time environment should be standardized. This includes both general and domain-specific
run-time services. General services include object creation, life cycle management, object-
persistence support, etc.
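The general run-time services mentioned above can be sketched as a toy component container; the container, component classes, and life-cycle method names here are all hypothetical, chosen only to illustrate object creation and life-cycle management.

```python
class Container:
    """Toy run-time environment offering two general services:
    object creation and life-cycle management."""
    def __init__(self):
        self._live = []

    def create(self, component_cls, *args):
        # Object-creation service: instantiate and start a component.
        comp = component_cls(*args)
        if hasattr(comp, "start"):
            comp.start()
        self._live.append(comp)
        return comp

    def shutdown(self):
        # Life-cycle service: stop components in reverse creation order.
        for comp in reversed(self._live):
            if hasattr(comp, "stop"):
                comp.stop()
        self._live.clear()

class Logger:
    """Example component with start/stop life-cycle hooks."""
    def __init__(self):
        self.state = "new"
    def start(self):
        self.state = "running"
    def stop(self):
        self.state = "stopped"
```

Real component frameworks add persistence, naming, and transaction services on top of this basic creation/tear-down cycle.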

Many development organizations are not yet adapted to the basic principles of component-based
software engineering (CBSE), so a component-based approach cannot simply be dropped into
their existing development processes. The approach reuses existing components, which reduces
implementation effort significantly; however, it increases the effort needed for system
verification, and the development process has to be adjusted accordingly. Case studies from
various industries suggest that achieving a complete separation of the two development
processes is very difficult, and that the approach puts a heavy load on architectural
issues and on system and component verification.

Questions

1. Which of the following is a basic element of the Ideal Component Model?

a. Rigorously Tested

b. Re-Usable

c. Documented

d. All of the answers are correct.

2. Which of the following is NOT a characteristic of CBSE?

a. Re-Uses Existing Components

b. Develops Everything From Scratch

c. Based on Identifying Functional Pieces

d. Combines Components

3. Which of the following truly describes Bridges?


a) Encapsulation, whereby a component is encased within an alternative abstraction
b) Translation between the assumptions of an arbitrary component and the provided
assumptions of some other arbitrary component
c) Incorporation of a planning function that in effect results in run-time determination of the
translation
d) None of the mentioned

Incremental Model

In the incremental model, the whole requirement is divided into various builds. Multiple
development cycles take place, making the life cycle a “multi-waterfall” cycle. Cycles
are divided into smaller, more easily managed modules. The incremental model is a type of
software development model, like the V-model, the Agile model, etc.

In this model, each module passes through the requirements, design, implementation
and testing phases. A working version of the software is produced in the first build, so
you have working software early in the software life cycle. Each subsequent release
of a module adds function to the previous release. The process continues until the complete
system is achieved.

Figure 4.2.1.6 Shows life cycle of different builds which may be done simultaneously
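The build-by-build flow described above can be sketched as a simple loop: every increment runs its own mini life cycle of the four phases, and the delivered system grows by one build per cycle (phase and build names here are illustrative, not prescribed by the model).

```python
# Each build passes through the same four phases; one increment is
# delivered to the customer per cycle.
PHASES = ["requirements", "design", "implementation", "testing"]

def run_build(name):
    """The mini life cycle of a single build."""
    return [(name, phase) for phase in PHASES]

def incremental_delivery(n_builds):
    """Deliver Build 1 .. Build N, one working release per cycle."""
    delivered = []
    for i in range(1, n_builds + 1):
        run_build(f"Build {i}")         # the increment's own life cycle
        delivered.append(f"Build {i}")  # released to the customer
    return delivered
```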

Advantages of Incremental model:

 Generates working software quickly and early during the software life cycle.
 This model is more flexible – less costly to change scope and requirements.

 It is easier to test and debug during a smaller iteration.

 In this model the customer can respond to each build.

 Lowers initial delivery cost.

 Easier to manage risk because risky pieces are identified and handled during their own
iteration.

Disadvantages of Incremental model:

 Needs good planning and design.

 Needs a clear and complete definition of the whole system before it can be broken
down and built incrementally.

 Total cost is higher than that of the waterfall model.

When to use the Incremental model:

 This model can be used when the requirements of the complete system are clearly
defined and understood before any development work starts.

 Major requirements must be defined; however, some details can evolve and change
with time.

 Such models are used where requirements are clear and can be implemented phase
wise. As the figure shows, the requirements are divided into Build 1, Build 2, …,
Build N, and the builds are delivered accordingly.

 There is a need to get a product to the market early.

 A new technology is being used.

 There are some high risk features and goals.

 Mostly such model is used in web applications and product based companies.

RAD Model

 RAD is a Rapid Application Development model.


 Using the RAD model, software product is developed in a short period of time.

 The initial activity starts with the communication between customer and developer.

 Planning depends upon the initial requirements and then the requirements are divided into
groups.

 Planning is more important to work together on different modules.

The RAD model consists of the following phases:

1) Business Modeling
2) Data modeling
3) Process modeling
4) Application generation
5) Testing and turnover

Figure 4.2.1.7 Rapid Application Development model

1) Business Modeling

 Business modeling covers the flow of information between the various business functions in
the project.
For example, it identifies what information is produced by each function and which functions
handle that information.
 It is necessary to perform complete business analysis to get the essential business
information.

2) Data modeling

 The information from the business modeling phase is refined into the set of objects that is
essential for the business. These objects can be represented in various forms, for example JSON, XML, etc.

 The attributes of each object are identified, and the relationships between objects are defined.
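A minimal sketch of what the data-modeling phase produces: objects with attributes and a relationship between them. The entity names (Customer, Order) and attributes are invented for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    """A data object with its identified attributes."""
    order_id: int
    amount: float

@dataclass
class Customer:
    """Another data object; one customer places many orders
    (the relationship defined between the objects)."""
    name: str
    orders: List[Order] = field(default_factory=list)

    def place(self, order: Order):
        self.orders.append(order)
```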

3) Process modeling

 The data objects defined in the data modeling phase are transformed to achieve the
information flow needed to implement the business model.

 The process description is created for adding, modifying, deleting or retrieving a data
object.

4) Application generation

 In the application generation phase, the actual system is built.

 To construct the software, automated tools may be used.

5) Testing and turnover

 The prototypes are tested independently after each iteration, so the overall testing time
is reduced.

 The data flow and the interfaces between all components are fully tested. Since RAD
emphasizes reuse, most of the programming components have already been tested.

Advantages of RAD Model

 Application development and delivery are fast.

 This model is flexible if any changes are required and can be extended as well.

 Reviews are taken from the clients at the starting of the development; hence, there are
lesser chances to miss the requirements.

Disadvantages of RAD Model

 The feedback from the user is required at every development phase.

 This model is not a good choice for long term and large projects.
Questions

1. What are the different phases in the RAD Model? What is the RAD model?

Answer :

1. Business Modeling: The information flow among business functions is defined
by answering questions like: what information drives the business process,
what information is generated, who generates it, where does the information
go, who processes it, and so on.
2. Data Modeling: The information collected from business modeling is
refined into a set of data objects (entities) that are needed to support the
business. The attributes (characteristics of each entity) are identified and the relations
between these data objects (entities) are defined.
3. Process Modeling: The data objects defined in the data modeling phase are
transformed to achieve the information flow necessary to implement a
business function. Processing descriptions are created for adding, modifying,
deleting or retrieving a data object.

4. Application Generation: Automated tools are used to facilitate construction of the
software, often using fourth-generation language (4GL) techniques.

5. Testing and Turnover: Many of the programming components have already been tested,
since RAD emphasizes reuse. This reduces overall testing time. But new components must be
tested and all interfaces must be fully exercised.

2. Explain Disadvantages of Rapid Application Development (RAD).

3. What are the advantages and disadvantages of RAD?


Shree Mahaveerai Namah

Chapter 5

Agile Methods

Pair Programming

The figure shows the general model of agile methodology, in which iteration is central to
releasing different versions.

Agile Programming Best Practices

Agile teams, committed to frequent, regular, high-quality production, find themselves striving
to find ways to keep short-term and long-term productivity as high as possible. Proponents of
pair programming ("pairing") claim that it boosts long-term productivity by substantially
improving the quality of the code. But it is fair to say that for a number of reasons, pairing is
by far the most controversial and least universally-embraced of the agile programmer
practices.

Pairing Mechanics
Pairing involves having two programmers working at a single workstation. One programmer
"drives," operating the keyboard, while the other "navigates," watching, learning, asking,
talking, and making suggestions. In theory, the driver focuses on the code at hand: the syntax,
semantics, and algorithm. The navigator focuses less on that, and more on a level of
abstraction higher: the test they are trying to get to pass, the technical task to be delivered
next, the time elapsed since all the tests were run, the time elapsed since the last repository
commit, and the quality of the overall design. The theory is that pairing results in better
designs, fewer bugs, and much better spread of knowledge across a development team, and
therefore more functionality per unit time, measured over the long term.

Pair programming is widely adopted by some organizations and shunned by others. It is
always a topic for debate and people will have their preferences. We are all humans and there
are times when almost everyone can benefit from pair programming.

Spreading Knowledge

Certainly as a mentoring mechanism, pairing is hard to beat. If pairs switch off regularly (as
they should), pairing spreads several kinds of knowledge throughout the team with great
efficiency: codebase knowledge, design and architectural knowledge, feature and
problem domain knowledge, language knowledge, development platform knowledge,
framework and tool knowledge, refactoring knowledge*, and testing knowledge. There
is not much debate that pairing spreads these kinds of knowledge better than traditional code
reviews and less formal methods. So what productivity penalty, if any, do you pay for
spreading knowledge so well?

*Refactoring knowledge - Refactoring is the process of clarifying and simplifying
the design of existing code without changing its behavior (the functionality of the code).
Agile teams maintain and extend their code a lot from iteration to iteration, and without
continuous refactoring this is hard to do, because un-refactored code tends to
rot. Rot takes several forms: unhealthy dependencies between classes or packages, bad
allocation of class responsibilities, far too many responsibilities per method or class,
duplicate code, and many other varieties of confusion and clutter. Refactoring, then,
is a process in which the code of an application is consistently fine-tuned so that the
functionality of the code does not change.
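A minimal before/after refactoring sketch (function names invented): duplicated discount logic is extracted into one well-named function, and the observable behaviour stays exactly the same, which is the defining property of a refactoring.

```python
# Before: the same discount rule is duplicated in two places (code "rot").
def book_price_before(price):
    return price - price * 0.10

def dvd_price_before(price):
    return price - price * 0.10

# After: the duplication is factored out; behaviour is unchanged.
def discounted(price, rate=0.10):
    return price - price * rate

def book_price(price):
    return discounted(price)

def dvd_price(price):
    return discounted(price)
```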

Every time we change code without refactoring it, rot worsens and spreads. Code rot
frustrates us, costs us time, and unduly shortens the lifespan of useful systems. In an agile
context, it can mean the difference between meeting or not meeting an iteration deadline.

Refactoring code ruthlessly prevents rot, keeping the code easy to maintain and extend. This
extensibility is the reason to refactor and the measure of its success. But note that it is only
"safe" to refactor the code this extensively if we have extensive unit test suites of the kind we
get if we work Test-First. Without being able to run those tests after each little step in a
refactoring, we run the risk of introducing bugs. If you are doing true Test-Driven
Development (TDD), in which the design evolves continuously, then you have no choice
about regular refactoring, since that's how you evolve the design.
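The kind of unit-test safety net described above can be sketched with Python's built-in unittest module; the function under test and its cases are invented for illustration. The point is that the whole suite is cheap to re-run after each small refactoring step.

```python
import unittest

def fizzbuzz(n):
    """Tiny function under test, evolved test-first."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTests(unittest.TestCase):
    """Run after every refactoring step: a green bar means the
    change preserved behaviour."""
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")
    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")
    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")
    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")
```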

Pairing and Productivity

Research results and anecdotal reports seem to show that short-term productivity might
decrease modestly (about 15%), but because the code produced is so much better, long-term
productivity goes up. And certainly it depends on how you measure productivity, and over
what term. In an agile context, productivity is often measured in running, tested features
actually delivered per iteration and per release. If a team measures productivity in lines of
code per week, they may indeed find that pairing causes this to drop (and if that means fewer
lines of code per running, tested feature, that's a good thing!).

Productivity and Staff Turnover

Proponents of pairing claim that if you measure productivity across a long enough term to
include staff being hired and leaving, pairing starts to show even more value. In many
mainstream projects, expertise tends to accumulate in "islands of knowledge." Individual
programmers tend to know lots of important things that the other programmers do not know
as well. If any of these islands leaves the team, the project may be delayed badly or worse.
Part of the theory of pairing is that by spreading many kinds of knowledge so widely within a
team, management reduces their exposure to this constant threat of staff turnover. In Extreme
Programming, they speak of the Truck Number: the number of team members that would
need to be hit by a truck to kill the project. Extreme Programming projects strive to keep the
Truck Number as close as possible to the total team size. If someone leaves, there are usually
several others to take his or her place. It is not that there is no specialization, but certainly
everyone knows more about all of what is going on. If you measure productivity in terms of
features delivered over several releases by such a team, it should be higher than if pairing
does not occur.

Pairing Strategies

In by-the-book Extreme Programming, all production code is written by pairs. Many non-XP
agile teams do not use pairing at all. But there is lots of middle ground between no pairing
and everyone pairing all the time. Try using pairing when mentoring new hires, for extremely
high-risk tasks, at the start of a new project when the design is new, when adopting a new
technology, or on a rotating monthly or weekly basis. Programmers who prefer to pair might
be allowed to, while those who do not are allowed not to. The decision to use code reviews
instead of any pairing at all is popular, but we don't know of any reason not to at least
experiment with pairing. There is no reasonable evidence that it hurts a team or a project, and
there is increasing evidence that it is a helpful best practice.

Scrum Team

An agile team in a Scrum environment often still includes people with traditional software
engineering titles such as programmer, designer, tester, or architect.
But on a Scrum team, everyone on the project works together to complete the set of work
they have collectively committed to complete within a sprint, regardless of their official title
or preferred job tasks.

Because of this, Scrum teams develop a deep form of camaraderie and a feeling that “we're
all in this together.”

When becoming a Scrum team member, those who in the past fulfilled specific traditional
roles tend to retain some of the aspects of their prior role but also add new traits and skills as
well. New roles in a Scrum team are the ScrumMaster or product owner.

A typical Scrum team is three to nine people. Rather than scaling by having a large team,
Scrum projects scale through having teams of teams. Scrum has been used on projects with
over 1,000 people. A natural consideration should, of course, be whether you can get by with
fewer people.

Although it's not the only thing necessary to scale Scrum, one well-known technique is the
use of a “Scrum of Scrums” meeting. With this approach, each Scrum team proceeds as
normal, but each team identifies one person who attends the Scrum of Scrums meeting to
coordinate the work of multiple Scrum teams.

These meetings are analogous to the daily Scrum meeting, but do not necessarily happen
every day. In many organizations, having a Scrum of Scrums meeting twice a week is
sufficient.

What is extreme programming (XP)?


Extreme programming is an Agile project management methodology that targets speed and
simplicity with short development cycles and less documentation. The process structure is
determined by five guiding values, five rules, and 12 XP practices .

Like other Agile methods, XP is a software development methodology broken down into
work sprints§. Agile frameworks follow an iterative process—you complete and review the
framework after every sprint, refine it for maximum efficiency, and adjust to changing
requirements. Similar to other Agile methods, XP’s design allows developers to respond to
customer stories, adapt, and change in real-time. But XP is much more disciplined, using
frequent code reviews and unit testing to make changes quickly. It’s also highly creative
and collaborative, prioritizing teamwork during all development stages.

§ A sprint is a short, time-boxed period when a scrum team works to complete a set amount
of work. Sprints are at the very heart of scrum and agile methodologies, and getting sprints
right will help your agile team ship better software with fewer headaches. Agile is a set of
principles and scrum is a framework for getting it done.

“With scrum, a product is built in a series of iterations called sprints that break down big,
complex projects into bite-sized pieces,” said Megan Cook, Group Product Manager for Jira
Software at Atlassian.

5 values of extreme programming
Extreme programming is value driven. Instead of using external motivators, XP allows your
team to work in a less complicated way (focusing on simplicity and collaboration over
complex designs), all based on these five values.

1. Simplicity

Before starting any extreme programming work, first ask yourself: What is the simplest thing
that also works? The “that works” part is a key differentiator—the simplest thing is not
always practical or effective. In XP, your focus is on getting the most important work done
first. This means you’re looking for a simple project that you know you can accomplish.

2. Communication

XP relies on quick reactivity and effective communication. In order to work, the team needs
to be open and honest with one another. When problems arise, you are expected to speak up.
The reason for this is that other team members will often already have a solution. And if they
don’t, you will come up with one faster as a group than you would alone.

3. Feedback

Like other Agile methodologies, XP incorporates user stories and feedback directly into the
process. XP’s focus is producing work quickly and simply, then sharing it to get almost
immediate feedback. As such, developers are in almost constant contact with customers
throughout the process. In XP, you launch frequent releases to gain insights early and often.
When you receive feedback, you will adapt the process to incorporate it (instead of the
project). For example, if the feedback reveals unnecessary lag time, you’d adjust your
process to have a pair of developers improve the lag time (latency) instead of adjusting the
project as a whole.

4. Courage

XP requires a certain amount of courage. You’re always expected to give honest updates on
your progress, which can get pretty vulnerable. If you miss a deadline in XP, your team lead
likely won’t want to discuss why. Instead, you’d tell them you missed the deadline, hold
yourself accountable, and get back to work.

If you're a team lead, your responsibility at the beginning of the XP process is to set the
expectation for success and define "done." There is often little planning for failure because
the team focuses on success. However, this can be scary, because things won’t always go as
planned. But if things change during the XP process, your team is expected to adapt and
change with it.

5. Respect

Considering how highly XP prioritizes communication and honesty, it makes sense that
respect would be important. In order for teams to communicate and collaborate effectively,
they need to be able to disagree. But there are ways to do that kindly. Respect is a good
foundation that leads to kindness and trust—even in the presence of a whole lot of honesty.
For extreme programming, the expectations are:

 Mutual respect between customers and the development team.

 Mutual respect between team members.

 A recognition that everyone on the team brings something valuable to the project.

5 rules of the extreme programming methodology


The values of extreme programming are the more philosophical aspects. The rules, on the
other hand, are the practical uses for how the work gets done. You’ll need both to run an
effective XP team.

1. Planning

In the planning stages of XP, you’re determining if the project is viable and the best fit for
XP. To do this, you’ll look at:

 User stories to see if they match the simplicity value and check in to ensure that the
customer is available for the process. If the user story is more complex, or it’s made
by an anonymous customer, it likely won’t work for XP.

 The business value and priority of the project to make sure that this falls in line with
“getting the most important work done first.”

 What stage of development you’re in. XP is best for early stage development, and
won’t work as well for later iterations.

§ See the section “When should you use extreme programming?” below.

Once you’ve confirmed the project is viable for XP, create a release schedule—but keep
in mind that you should be releasing early and often to gain feedback. To do this:

 Break the project down into iterations and create a plan for each one.

 Set realistic deadlines and a sustainable pace.

 Share updates as they happen, which empowers your team to be honest and
transparent.

 Share real-time updates that help the team identify, adapt, and make changes more
quickly.
 Use a project management tool to create a Kanban board§ or timeline to track your
progress in real-time.

§ Kanban: a Japanese manufacturing system in which the supply of components is regulated
through the use of an instruction card sent along the production line; also, the instruction
card used in such a system.

2. Managing

One of the key elements of XP is the physical space. XP purists recommend using an open
workspace where all team members work in one open room. Because XP is so collaborative,
you’ll benefit from having a space where you can physically come together. But that’s not
always practical in this day and age. If you work on a remote team, consider using
a platform that encourages asynchronous work for remote collaboration. This way, all
members can continue to work on the project together, even if they’re not physically together.

As in other Agile methods, use daily standup meetings to check in and encourage constant,
open communication. You’ll want to use both a weekly cycle and a quarterly cycle. During
your quarterly cycle, you and your team will review the stories that guide your work. You’ll
also study your XP process, looking for gaps (gap analysis) or opportunities to make changes.
Then you’ll work in weekly cycles, each of which starts with a customer meeting. The customer
chooses the user story they want programmers to work on that week.

As a manager or team lead, your focus will be on maintaining work progress, measuring the
pace, shifting team members around to address bugs or issues as they arise, or changing the
XP process to fit your current project and iteration. Remember, the goal of XP is to be
flexible and take action, so your work will be highly focused on the team’s current work and
reactive to any changes.

3. Designing

When you’re just starting out with extreme programming, begin with the simplest possible
design, knowing that later iterations will make it more complex. Do not add early
functionality at this stage; keep it as bare-bones as possible (almost like prototyping).

XP methodology teams often use class-responsibility-collaboration (CRC) cards to show
how each object in the design interacts. By filling out each field in the card, you get a
visual picture of all the functions as they relate and interact. CRC cards (which look
something like Kanban cards) include:

 Class (a collection of similar objects)

 Responsibilities (related to the class; methods, in some cases)

 Collaborators (classes that interact with this one)

CRCs are useful for stimulating the process and spotting potential problems. Regardless of
how you design, you’ll want to use a system that reduces potential bottlenecks. To do this, be
sure you’re proactively looking for risks. As soon as a potential threat emerges, assign one to
two team members to find a solution in the event that the threat takes place.
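A CRC card can be sketched as a simple record with its three fields; the class name, responsibilities, and collaborators below are made up purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CRCCard:
    """One card: a class, what it does, and who it talks to."""
    class_name: str
    responsibilities: List[str] = field(default_factory=list)
    collaborators: List[str] = field(default_factory=list)

# Example card for a hypothetical Order class.
order_card = CRCCard(
    class_name="Order",
    responsibilities=["compute total", "track line items"],
    collaborators=["Customer", "LineItem"],
)
```

Laying several such cards side by side is what makes interaction gaps and overloaded classes visible during a design session.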

4. Coding

One of the more unique aspects of XP is that you’ll stay in constant contact with the customer
throughout the coding process. This partnership allows you to test and incorporate feedback
within each iteration, instead of waiting until the end of a sprint. But coding rules are fairly
strict in XP. Some of these rules include:

 All code must meet coding standards.

 Using a unit test to nail down requirements and develop all aspects of the project.

 Programming as a pair—two developers work together simultaneously on the same
computer. This doesn’t add any time, but rather uses double the focus to produce the
highest-quality results.

 Use continuous integration to add new code and immediately test it.

 Only one pair can update code at any given time to reduce errors.

 Collective code ownership—any member of the team can change your code at any
time.

5. Testing

You should be testing throughout the extreme programming process. All code will need to
pass unit tests before it’s released. If you discover bugs during these tests, you’ll create new,
additional tests to fix them. Later on, you’ll configure the same user story you’ve been
working on into an acceptance test. During this test, the customer reviews the results to see
how well you translated the user story into the product.
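The two levels of testing described above can be sketched side by side; the user story, coupon code, and function names are invented for this example. The unit test gates the release of the code, while the acceptance test walks the customer's story end to end.

```python
# User story (invented): "As a shopper, I can apply a coupon code
# and see the reduced total."

def apply_coupon(total, code):
    """Production code under test, kept deliberately simple."""
    return round(total * 0.8, 2) if code == "SAVE20" else total

def test_unit_discount_math():
    # Unit test: must pass before the code is released.
    assert apply_coupon(100.0, "SAVE20") == 80.0

def test_acceptance_shopper_applies_coupon():
    # Acceptance test: mirrors the user story, reviewed with the customer.
    cart_total = 50.0
    assert apply_coupon(cart_total, "SAVE20") == 40.0
    assert apply_coupon(cart_total, "BADCODE") == 50.0
```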

When should you use extreme programming?

Because extreme programming focuses on software development and is used by software
teams, it only works in certain settings. To get the most value out of extreme programming,
it’s best to use it when you:

 Manage a smaller team. Because of its highly collaborative nature, XP works best on
smaller teams of under 10 people.
 Are in constant contact with your customers. XP incorporates customer requirements
throughout the development process, and even relies on them for testing and approval.

 Have an adaptable team that can embrace change (without hard feelings). By its very
nature, extreme programming will often require your whole team to toss out their hard
work. There are also rules that allow other team members to make changes at any
time, which doesn’t work if your team members might take that personally.

 Are well versed in the technical aspects of coding. XP isn’t for beginners. You need
to be able to work and make changes quickly.

Question

1.What is the difference and similarity between Agile and Scrum?

2.State some of the Agile quality strategies.

Answer: Some of the Agile quality strategies are –

 Iteration

 Re-factoring

 Dynamic code analysis

 Short feedback cycles

 Reviews and inspection

 Standards and guidelines

 Milestone reviews

3.Is there any drawback of the Agile model? If yes, explain.

4. What is the role of the Scrum Master?

5. User requirements are expressed as __________ in Extreme Programming.


a) implementation tasks
b) functionalities
c) scenarios
d) none of the mentioned
Answer: c) scenarios.
Explanation: User requirements are expressed as scenarios or user stories. These are written
on cards and the development team breaks them down into implementation tasks. These tasks
are the basis of schedule and cost estimates.

6. Which four framework activities are found in the Extreme Programming(XP) ?


a) analysis, design, coding, testing
b) planning, analysis, design, coding
c) planning, design, coding, testing
d) planning, analysis, coding, testing

Shree Mahaveerai Namah

Chapter 3

UML (Unified Modelling Language)


UML is a visual language that provides a way for software engineers and scientists to
construct, document and visualize software systems. While UML is not a programming
language, it provides visual representations that help software developers better
understand potential outcomes or errors in programs.

UML diagrams are built from elements of object-oriented concepts. Software
developers use UML to create successful models and designs for properly functioning
systems, which simplifies the software development process. Before writing code,
developers draw UML diagrams to document different workflows and activities and to
delegate roles. This helps them make informed decisions about which systems to develop and
how to do so efficiently.

Types of UML structural diagrams

UML defines 13 types of diagrams that software developers and other professionals draw
and use. These diagrams fall into two groups:

Structural diagrams

Structural diagrams show a system's structure, implementation levels, individual parts and
how these components interact to create a functional system. There are six kinds of structure
diagrams, including:

1. Class diagrams: Class diagrams are the foundational principle of any object-oriented
software system, and depict classes and sets of relationships between classes. The
classes within a system or operation depict its structure, which helps software
engineers and developers identify the relationship between each object.

2. Component diagrams: These diagrams indicate the organizational structure of
physical elements in a software system. They help engineers and developers
understand whether the systems need additional improvements or if their original
structure performs efficiently.

3. Composite structure diagrams: This diagram shows the internal structure of a class
and how it communicates with other parts of the system.

4. Deployment diagrams: Deployment diagrams show software engineers and
developers what hardware components are available and which types of software
components can efficiently run on them. They’re beneficial when software is distributed
or used across multiple machines with diverse configurations.

5. Object diagrams: Object diagrams show the relationship between the functions in a
system, along with their characteristics.
6. Package diagrams: Packages are the various levels that may contribute to a system's
architecture, and package diagrams help engineers and developers organize their
UML diagrams into groups that make them easier to understand.

Behavioral diagrams

Behavioral diagrams show how a proper system should function. Explore these seven kinds
of behavioral diagrams:

1. Use case diagrams: A use case diagram shows the parts of the system or
functionality of a system and how they relate to each other. It gives developers a clear
understanding of how things function without needing to look into implementation
details.

2. Activity diagrams: These diagrams depict the flow of control in a system and may be
useful as a reference to follow when executing a use case diagram. Activity diagrams
can also depict the causes of a particular event.

3. Sequence diagram: This diagram shows how objects communicate with each other
sequentially. It's commonly used by software developers to document and understand
the requirements needed to establish new systems or gain more knowledge about
existing systems.

4. Communication diagrams: Communication diagrams show the sequential
exchange of messages between objects. These diagrams are similar to sequence
diagrams, but communication diagrams offer more flexibility.

5. Interaction overview diagrams: This diagram uses different kinds of interaction
diagrams to show the sequence of actions. These diagrams help developers simplify
complex interactions into simple events.

6. State machine diagrams: These diagrams describe the states an object can be in and
how it behaves differently depending on its current state.

7. Timing diagrams: Timing diagrams are a type of sequence diagram used to show the
behavior of objects over time. Developers use them to illustrate duration constraints
and the timing of changes in objects' behavior.

Class diagram

 A class diagram

 expresses class definitions to be implemented

 lists name, attributes, and methods for each class


 shows relationships between classes

 UML allows different levels of detail on both the attributes and the methods of one
class

 could be just the class name in a rectangle

 or like the general form shown on the next diagram

The general form of a class box, here for a class named Software Specification:

Software Specification          (class name)
--------------------------------------------
attribute
attribute : initialValue
classAttribute                  (common for all instances of the class)
/derivedAttribute               (computed from other attributes)
--------------------------------------------
method()
method(parameter1, parameter2, ...)
method(parameter1, parameter2 = initialValue, ...)
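The class-box elements above map naturally onto code. A minimal Python sketch (the class and attribute names are invented for illustration): a class attribute shared by all instances, instance attributes with initial values, a derived attribute computed from the others, and methods with and without default parameter values.

```python
class SoftwareSpecification:
    # Class attribute: common for all instances of the class.
    document_type = "SRS"

    def __init__(self, title, pages=0):
        # Instance attributes, one with an initial value.
        self.title = title
        self.pages = pages

    @property
    def is_long(self):
        # Derived attribute: computed from another attribute.
        return self.pages > 100

    def summary(self, prefix="Spec"):
        # method(parameter1, parameter2 = initialValue, ...)
        return f"{prefix}: {self.title} ({self.pages} pages)"

spec = SoftwareSpecification("Library System", pages=120)
print(spec.summary())   # -> Spec: Library System (120 pages)
print(spec.is_long)     # -> True
```

UML deliberately leaves the level of detail open; the same class could be drawn as just a named rectangle or with every attribute and method signature shown as above.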
Relationships :

 Three Relationships in UML

1) Dependency

2) Association

3) Generalization

1) Dependency: A Uses Relationship

 Dependencies

 occur when one object depends on another

 if you change one object's interface, you need to change the dependent
object

 arrows point from the dependent object to the needed object


Figure 2 shows dependencies: an ATM Machine depends on a Credit/Debit card reader,
Account Balance, Amount Request and User/LogIn; an Alexa Simulator depends on Verbal
Command and Resources.
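A small Python sketch of the ATM dependency from the figure (class and method names are illustrative, not from the text): ATMMachine uses a CardReader, so any change to CardReader's interface would force a change in ATMMachine.

```python
class CardReader:
    """Needed object: the ATM depends on this interface."""
    def read_card(self):
        # A real reader would talk to hardware; this is a stub.
        return {"card_number": "1234-5678", "valid": True}

class ATMMachine:
    """Dependent object: uses CardReader but does not own it."""
    def authenticate(self, reader):
        # Dependency: if CardReader.read_card() changes its signature
        # or return shape, this method must change too.
        card = reader.read_card()
        return card["valid"]

atm = ATMMachine()
print(atm.authenticate(CardReader()))  # -> True
```

In the diagram the dashed arrow would point from ATMMachine (the dependent) to CardReader (the needed object).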

Associations

 Associations imply that a relationship must be preserved for some time (0.01 ms to forever)

 Between what objects do we need to remember a relationship?

 Does a Transaction need to remember Account? No

 Would AccountCollection need to remember Accounts? Yes

AccountCollection 1 ---- 0..* Account
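The 1 to 0..* association above can be sketched in Python (names taken from the diagram, implementation details assumed): an AccountCollection remembers its Accounts for as long as the collection lives.

```python
class Account:
    def __init__(self, number, balance=0):
        self.number = number
        self.balance = balance

class AccountCollection:
    """One collection is associated with zero or more accounts (0..*)."""
    def __init__(self):
        self._accounts = {}  # the remembered association

    def add(self, account):
        self._accounts[account.number] = account

    def find(self, number):
        # Returns None when no account matches (the 0 end of 0..*).
        return self._accounts.get(number)

coll = AccountCollection()
coll.add(Account("A-100", 500))
print(coll.find("A-100").balance)  # -> 500
```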
What is a collaboration diagram?
A collaboration diagram, also known as a communication diagram, is an illustration of the
relationships and interactions among software objects in the Unified Modeling Language
(UML). Developers can use these diagrams to portray the dynamic behavior of a
particular use case and define the role of each object.

To create a collaboration diagram, first identify the structural elements required to carry out
the functionality of an interaction. Then build a model using the relationships between those
elements. Several vendors offer software for creating and editing collaboration diagrams.

Notations of a collaboration diagram


A collaboration diagram resembles a flowchart that portrays the roles, functionality and
behavior of individual objects as well as the overall operation of the system in real time. The
four major components of a collaboration diagram include the following:

1. Objects. These are shown as rectangles with naming labels inside. The naming label
follows the convention of object name: class name. If an object has a property or state
that specifically influences the collaboration, this should also be noted.

2. Actors. These are instances that invoke the interaction in the diagram. Each actor has a
name and a role, with one actor initiating the entire use case.

3. Links. These connect objects with actors and are depicted using a solid line between two
elements. Each link is an instance where messages can be sent.

4. Messages between objects. These are shown as a labeled arrow placed near a link. These
messages are communications between objects that convey information about the activity
and can include the sequence number.

The most important objects are placed in the center of the diagram, with all other
participating objects branching off. After all objects are placed, links and messages should be
added in between.
Use Case Diagram
A use case diagram is a behavioural UML diagram type frequently used to analyze various
systems. It enables you to visualize the different types of roles (users) in a system and how
those roles interact with the system.

Importance of Use Case Diagrams

 To identify functions and how users interact with them – The primary purpose of
use case diagrams.

 For a high-level view of the system – Especially useful when presenting to managers
or stakeholders. You can highlight the roles(users) that interact with the system and
the functionality provided by the system without going deep into inner workings of
the system.

 To identify internal and external factors – This might sound simple but in large
complex projects a system can be identified as an external role in another use case.

Use Case Diagram objects

Use case diagrams consist of 4 objects.


 Actor
 Use case
 System
 Package
These objects are further explained below.

Actor
An actor in a use case diagram is any entity that performs a role in a given system. This
could be a person, organization or an external system, and it is usually drawn as the stick
figure shown below.
Use Case
A use case represents a function or an
action within the system. It’s drawn as an
oval and named with the function.

System
The system element is used to define the scope of the use case and is drawn as a rectangle.
This is an optional element but useful when you're visualizing large systems. For example,
you can create all the use cases and then use the system object to define the scope covered
by your project. Or you can even use it to show the different areas covered in different
releases.
Package
The package is another optional element that is extremely useful in complex diagrams.
Similar to class diagrams (from OOP), packages are used to group together use cases. They
are drawn like the image shown here.

When it comes to analyzing the requirements of a system (SRS), use case diagrams are second
to none. A use case diagram mainly consists of actors, use cases and relationships.
Actors

 Give meaningful, business-relevant names to actors – For example, if your use case
interacts with an outside organization, it's much better to name it by function rather than
by organization name. (E.g.: Airline Company is better than AirIndia.)

 Primary actors should be to the left side of the diagram – This enables you to quickly
highlight the important roles in the system.

 Actors model roles (not positions) – In a hotel, both the front office executive and the
shift manager can make reservations. So something like "Reservation Agent" should be
used as the actor name to highlight the role (action or function).

 External systems are actors – If your use case is send-email and it interacts with the
email management software, then that software is an actor for that particular use case.

 Actors don't interact with other actors – In case actors interact within a system, you
need to create a new use case diagram with the system from the previous diagram
represented as an actor.

 Place inheriting actors below the parent actor – This makes the diagram more readable
and quickly highlights the use cases specific to that actor.
Use Cases

 Names begin with a verb – A use case models an action (function), so the name should
begin with a verb.

 Make the name descriptive – This gives more information to others who are looking at
the diagram. For example, "Calculate Tax" is better than "Calculate".

 Highlight the logical order – For example, if you're analyzing a bank customer, typical
use cases include open account, deposit, withdraw and request statement. Showing them
in the logical order makes more sense.

 Place included use cases to the right of the invoking use case – This is done to improve
readability and add clarity.

 Place inheriting use cases below the parent use case – Again, this is done to improve
the readability of the diagram.

Relationships

 Arrow points to the base use case when using <<extend>>

 <<extend>> can have optional extension conditions


 Arrow points to the included use case when using <<include>>

 Both <<extend>> and <<include>> are shown as dashed arrows.

 Actor and use case relationships don’t show arrows.

Example of use cases:

Example 1

A food delivery service mobile app

In this use case scenario, a food delivery mobile application wants to expand to include more
food and drink establishments, even if some establishments have a limited menu.

Deliver the Good Dishes, a food delivery service, wants to grow the number of offered
establishments and aims to include coffee shops and convenience stores. The software
developers need to determine how the newly featured establishments fit within current
software parameters and what user thresholds might prompt the software to move to the
next stage. The team runs use cases like:

 UC1: A customer searches for a specific brand item not found in the area or selected
establishment. For example, in the Jio Grocery App, a customer wants to find Unibic
products.

 UC2: A customer with a low bill total receives a minimum-purchase prompt (or a
message showing how much more must be purchased to avoid delivery charges).

 UC3: A feature that allows customers to click "Order again," getting a previously
purchased selection delivered again with a quick, user-friendly interaction.

As the details given here show, the proposed system is an improvement over an existing
e-commerce web app. The three use cases described above can be applied to all e-retailer
web apps.

Example 2

An airline's online booking system

In this use case example, Air India wants to refresh its online booking system, offering more
complex fare options, ancillary revenue options and additional optional services, like
online check-in.

Air India software engineers design a refreshed fare reservation page, complete with tiered
fare selection and extra options like lounge access, free flight change or cancellation, and
complimentary checked bags. It also allows customers (account holders) to pay by credit
card, debit card, online payment platforms or Air India loyalty program miles. The software
engineers conduct several use cases to establish how the booking flow works and to identify
potential concerns. They run use cases that include:
 Use Case : To book a ticket between starting point and destination.
 Primary Actor : A customer who wishes to travel.
 Precondition: Customer must have logged in.
 Main success action

1. A customer browsing flight schedules and fare prices.

2. A customer selecting a flight date and time.

3. A customer adding on lounge access and free checked bags.

4. A customer paying with a personal credit card.

5. A customer paying with Air India loyalty miles.

Exception Scenario:

A customer wants to rebook previously travelled itinerary.

https://creately.com/guides/sequence-diagram-tutorial/

What is a Sequence Diagram?

Sequence diagrams, commonly used by developers, model the interactions between objects in
a single use case. They illustrate how the different parts of a system interact with each other
to carry out a function, and the order in which the interactions occur when a particular use
case is executed.

In simpler words, a sequence diagram shows how different parts (objects) of a system work
in a ‘sequence’ to get something done.

Sequence diagrams are commonly used in software development to illustrate the behavior of
a system or to help developers design and understand complex systems. They can be used to
model both simple and complex interactions between objects, making them a useful tool for
software architects, designers, and developers.


Sequence Diagram Notations

A sequence diagram is structured in such a way that it represents a timeline that begins at the
top and descends gradually to mark the sequence of interactions. Each object has a column
and the messages exchanged between them are represented by arrows.

A Quick Overview of the Various Parts of a Sequence Diagram


Lifeline Notation

A sequence diagram is made up of several of these lifeline notations, which should be
arranged horizontally across the top of the diagram. No two lifeline notations should
overlap each other. They represent the different objects or parts that interact with each
other in the system during the sequence.

A lifeline notation with an actor element symbol is used when the particular sequence
diagram is owned by a use case.

More variations in lifelines:

 A lifeline with an entity element represents system data. For example, in a customer
service application, the Customer entity would manage all data related to a customer.

 A lifeline with a boundary element indicates a system boundary/software element in a
system; for example, user interface screens, database gateways or menus that users
interact with are boundaries.

 A lifeline with a control element indicates a controlling entity or manager. It organizes
and schedules the interactions between the boundaries and entities and serves as the
mediator between them.

Activation Bars

The activation bar is the box placed on the lifeline. It is used to indicate that an object is
active (or instantiated) during an interaction between two objects. The length of the
rectangle indicates the duration the object stays active.

In a sequence diagram, an interaction between two objects occurs when one object sends a
message to another. The use of the activation bar on the lifelines of the Message Caller (the
object that sends the message) and the Message Receiver (the object that receives the
message) indicates that both are active/instantiated during the exchange of the message.
Message Arrows

An arrow from the Message Caller to the Message Receiver specifies a message in a
sequence diagram. A message can flow in any direction: from left to right, right to left, or
back to the Message Caller itself. While you can describe the message being sent from one
object to the other on the arrow, with different arrowheads you can indicate the type of
message being sent or received.

The message arrow comes with a description, known as a message signature, on it. The
format for this message signature is below. All parts except the message_name are optional.

attribute = message_name (arguments): return_type

For example: balance = getBalance(accountNumber): double

 Synchronous message

As shown in the activation bars example, a synchronous message is used when the sender
waits for the receiver to process the message and return before carrying on with another
message. The arrowhead used to indicate this type of message is a solid one, like the one
below.
 Asynchronous message

An asynchronous message is used when the message caller does not wait for the receiver to
process the message and return before sending other messages to other objects within the
system. The arrowhead used to show this type of message is a line arrow as shown in the
example below.
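The difference between the two message types can be sketched in Python (not from the text; the thread stands in for an asynchronous receiver): a synchronous call blocks until the receiver returns, while an asynchronous one lets the caller continue immediately.

```python
import threading
import time

class Receiver:
    def process(self, msg):
        time.sleep(0.1)          # simulate some work
        return f"done: {msg}"

receiver = Receiver()

# Synchronous message: the caller waits for the return message
# before carrying on.
result = receiver.process("sync-request")
print(result)                    # -> done: sync-request

# Asynchronous message: the caller fires the message and moves on
# without waiting for the receiver to finish.
t = threading.Thread(target=receiver.process, args=("async-request",))
t.start()
print("caller continues immediately")
t.join()                         # tidy up at the end of the scenario
```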

 Return message

A return message is used to indicate that the message receiver is done processing the
message and is returning control to the message caller. Return messages are optional
notation pieces, for an activation bar that is triggered by a synchronous message always
implies a return message.

Figure: a Forward (Request) message from the Message Caller and a Return (Response)
message from the Message Receiver.

Tip: You can avoid cluttering up your diagrams by minimizing the use of return messages,
since the return value can be specified in the initial message arrow itself.
 Participant creation message

Objects do not necessarily live for the entire duration of the sequence of events. Objects or
participants can be created according to the message that is being sent.

The dropped participant box notation can be used when you need to show that the particular
participant did not exist until the create call was sent. If the created participant does
something immediately after its creation, you should add an activation box right below the
participant box.

 Participant destruction message

Likewise, participants can be deleted from a sequence diagram when they are no longer
needed. This is done by adding an ‘X’ at the end of the lifeline of the said participant.

 Reflexive message

When an object sends a message to itself, it is called a reflexive message. It is indicated
with a message arrow that starts and ends at the same lifeline, as shown in the example
below.

Comment

UML diagrams generally permit the annotation of comments in all UML diagram types. The
comment object is a rectangle with a folded-over corner, as shown below. The comment can
be linked to the related object with a dashed line.


Note: View Sequence Diagram Best Practices to learn about sequence fragments.

How to Draw a Sequence Diagram


A sequence diagram represents the scenario or flow of events in one single use case. The
message flow of the sequence diagram is based on the narrative of the particular use case.

Therefore, before you start drawing the sequence diagram or deciding what interactions should
be included in it, you need to draw the use case diagram and prepare a comprehensive
description of what the particular use case does.

From the above use case diagram example of ‘Create New Online Library Account’, we will
focus on the use case named ‘Create New User Account’ to draw our sequence diagram
example.

Before drawing the sequence diagram, it’s necessary to identify the objects or actors that
would be involved in creating a new user account. These would be;

 Librarian

 Online Library Management system

 User credentials database

 Email system

Once you identify the objects, it is then important to write a detailed description of what the
use case does. From this description, you can easily figure out the interactions (that should go
in the sequence diagram) that would occur between the objects above, once the use case is
executed.

Here are the steps that occur in the use case named ‘Create New Library User Account’.

 The librarian requests the system to create a new online library account

 The librarian then selects the library user account type

 The librarian enters the user’s details

 The user’s details are checked using the user Credentials Database

 The new library user account is created

 A summary of the new account’s details is then emailed to the user

From each of these steps, you can easily specify what messages should be exchanged
between the objects in the sequence diagram. Once it’s clear, you can go ahead and start
drawing the sequence diagram.

The sequence diagram below shows how the objects in the online library management system
interact with each other to perform the function ‘Create New Library User Account’.
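The same message exchange can be sketched in code. A rough Python sketch (the class and method names are assumptions for illustration, not taken from the diagram), where each call corresponds to one message arrow in the sequence:

```python
class CredentialsDatabase:
    def check(self, details):
        # Step: the user's details are checked against the database.
        return bool(details.get("name")) and bool(details.get("email"))

class EmailSystem:
    def send_summary(self, email, account):
        # Step: a summary of the new account is emailed to the user.
        return f"summary of {account} sent to {email}"

class OnlineLibrarySystem:
    def __init__(self, db, mailer):
        self.db, self.mailer = db, mailer
        self.accounts = []

    def create_account(self, account_type, details):
        # Steps: librarian's request, account type and user details.
        if not self.db.check(details):         # details validated
            return None
        account = f"{account_type}:{details['name']}"
        self.accounts.append(account)          # new account created
        return self.mailer.send_summary(details["email"], account)

system = OnlineLibrarySystem(CredentialsDatabase(), EmailSystem())
print(system.create_account("Student",
                            {"name": "Asha", "email": "asha@example.com"}))
```

In a sequence diagram each of these calls would appear as a message arrow between the corresponding lifelines, in top-to-bottom order.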

Stakeholders have many issues to manage, so it's important to communicate with clarity and
brevity. Activity diagrams help people on the business and development sides of an
organization come together to understand the same process and behavior. An activity
diagram is made with a set of specialized symbols, including those used for starting, ending,
merging, or receiving steps in the flow, which are covered in more depth below.

Activity Diagrams Symbols

These activity diagram shapes and symbols are some of the most common types you'll find in UML
diagrams.

Start symbol – Represents the beginning of a process or workflow in an activity diagram. It
can be used by itself or with a note symbol that explains the starting point.

Activity symbol – Indicates the activities that make up a modelled process. These symbols,
which include short descriptions within the shape, are the main building blocks of an
activity diagram.

Connector symbol – Shows the directional flow, or control flow, of the activity. An
incoming arrow starts a step of an activity; once the step is completed, the flow continues
with the outgoing arrow.

Join symbol / Synchronization bar – Combines two concurrent activities and re-introduces
them to a flow where only one activity occurs at a time. Represented with a thick vertical
or horizontal line.

Fork symbol – Splits a single activity flow into two concurrent activities. Symbolized with
multiple arrowed lines leaving a synchronization bar.

Decision symbol – Represents a decision and always has at least two paths branching out
with condition text to allow users to view options. The same diamond symbol also
represents the merging of various flows.

Note symbol – Allows the diagram creators or collaborators to communicate additional
messages that don't fit within the diagram itself. Leave notes for added clarity and
specification.
Example of Activity diagram

Student filling a university form.

 An applicant wants to fill in a university form.

 The applicant fills out the Enrollment Form online.

 The form validation activity checks validations.

 If the form is filled out properly, the student gets a message to attend the university
orientation program.

 The student pays tuition fees and starts attending seminars/lectures.

The figure shows the activity diagram for university enrollment, including a fork and a join.
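The enrollment flow above can be sketched in Python (function names are illustrative): the decision node becomes an if, and the fork/join pair becomes two threads started together and joined before the flow continues.

```python
import threading

def form_is_valid(form):
    # The form validation activity.
    return bool(form.get("name")) and bool(form.get("program"))

def enroll(form):
    events = []
    if not form_is_valid(form):                 # decision node
        return ["ask applicant to correct form"]
    events.append("send orientation message")
    # Fork: two activities proceed concurrently.
    t1 = threading.Thread(target=events.append, args=("pay tuition fees",))
    t2 = threading.Thread(target=events.append,
                          args=("attend seminars/lectures",))
    t1.start(); t2.start()
    t1.join(); t2.join()                        # join: wait for both
    events.append("enrollment complete")
    return events

print(enroll({"name": "Ravi", "program": "B.Tech"}))
```

The two forked activities may finish in either order, which is exactly what the fork/join notation expresses.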

Activity diagrams present a number of benefits to users. Consider creating an activity
diagram to:
 Demonstrate the logic of an algorithm.
 Describe the steps performed in a use case.
 Illustrate a business process or workflow between users and the system.
 Simplify and improve any process by clarifying complicated use cases.
 Model software architecture elements, such as method, function, and operation.

Chapter 4
SRS and System design
Learning objectives
 A good SRS (Software Requirements Specification) is valuable for implementing a
quality project.
 Structure and components of an SRS.
 The different activities in the process of producing a good SRS.
 Attributes of an SRS and the main components of an SRS document.
 Use of use cases and DFDs for specifying functional requirements.

Definition

According to IEEE

(i) A condition or capability needed by a client to solve a problem or achieve a specific goal.

(ii) A condition or capability that must be possessed by a software system to satisfy a
contract, standard, specification or any other mutually agreed document.

One must understand that we are talking about the capability of the software system which is
proposed to be developed. Obviously, the form and structure of SRS documents may vary
depending on the SDLC model used for developing the proposed system.

Requirement Gathering: From this step onwards, the software development team works to
carry out the project. The team holds discussions with various stakeholders from the problem
domain and tries to bring out as much information as possible on their requirements. The
requirements are contemplated and segregated into user requirements, system requirements
and functional requirements. The requirements are collected using a number of practices,
such as:

 studying the existing or obsolete system and software,

 conducting interviews of users and developers,

 referring to the database, or

 collecting answers from questionnaires.

Structure and components of SRS


Many clients wish to automate their business processes, in order to increase productivity,
performance, precision, efficiency and effectiveness. So the client approaches a developer (a
company) to accomplish the task. Often the client does not understand software development
or its processes, and the developer does not understand the client’s requirements and
application area. Once the software system is developed, it is used by the end-user, and
acceptance by the end-user is equally important. Thus in any software project there are three
stakeholders: client, developer and end-user. The SRS provides a bridge between them in a
written and compiled form which meets the wish list of each stakeholder.

As we said before, the SRS also depends on the SDLC model proposed for the system
development. Therefore it is difficult to define a generic structure which can be applied to all
kinds of projects. Nevertheless it is possible to define a structure outline and the essential
components of an SRS. Any SRS document should address the following requirements,
which can be considered as components of the SRS.

1. Functionality.

2. Performance.

3. Constraints on system development.

4. GUI for end-user.

5. Security.

1. Functional requirements :- It specifies system behaviour, in particular the following points.

(a) For each function, it describes what the output should be for given input data.

(b) For each function, it describes the type of input data and its source.

(c) For each function, it describes the input data, its source, unit of measurement and its
valid range.

(d) It specifies all operations to be performed on input data to obtain output data; for
example, parameters affected by the operation, mathematical equations or logical
operations, and validation of input and output data.

(e) It describes the system response to abnormal conditions or invalid input data, which, if
processed at all, should not produce invalid output data. For example, a railway reservation
system should not book a ticket for valid input data if a seat is not available. As another
example, an order processing system should not process a valid order if the item is not
available in inventory.
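A toy Python sketch of requirement (e), with assumed names: even a well-formed booking request must be rejected when no seat is available.

```python
def book_ticket(request, seats_available):
    """Functional requirement (e): valid input, but the abnormal
    condition 'no seat available' must not yield a booking."""
    # Input validation: type and valid range of the input data.
    if not isinstance(request.get("passengers"), int) or request["passengers"] < 1:
        return {"status": "rejected", "reason": "invalid input"}
    # Abnormal condition: the request is valid, but the resource is not there.
    if request["passengers"] > seats_available:
        return {"status": "rejected", "reason": "no seat available"}
    return {"status": "booked", "seats": request["passengers"]}

print(book_ticket({"passengers": 2}, seats_available=1))
# -> {'status': 'rejected', 'reason': 'no seat available'}
```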

2. Performance requirements :- Performance requirements are of two types: static and
dynamic. Static requirements specify the number of terminals or users supported, the
number of files that can be processed and their sizes, if any. These are specifications
related to the capacity of the system. Dynamic requirements specify the response time and
throughput of the system. Response time specifies the expected time for completion of an
operation under given circumstances. Throughput specifies the expected number of
operations that can be performed in unit time.

All these requirements should be specified in measurable terms. For example, the developer
should not state that “System response should be quick”. Instead it should read as “Response
time should be less than one second 96% of the time”.
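A short Python sketch (illustrative, not from the text) of how such a measurable requirement can be checked: collect observed response times and test what fraction of them stays within the one-second budget.

```python
def meets_requirement(response_times, budget=1.0, fraction=0.96):
    """True if at least `fraction` of the observed response
    times are below `budget` seconds."""
    within = sum(1 for t in response_times if t < budget)
    return within / len(response_times) >= fraction

# 9 out of every 10 samples are under one second: only 90%, so the
# "less than one second 96% of the time" requirement is not met.
samples = [0.2, 0.4, 0.3, 0.9, 1.5, 0.1, 0.5, 0.6, 0.7, 0.8] * 10
print(meets_requirement(samples))  # -> False
```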

3. Constraints on system development :- There are a number of factors in the client’s
environment that can put constraints on system design. An SRS should identify and specify
all these constraints; for example, hardware requirements, back-up and recovery, fault
tolerance and security concerns.

4. GUI (Graphical User Interface) :- GUI requirements are becoming important. All
interactions of the software system with the end-user, other software or hardware should be
specified.

5. Security :- Specify any requirements regarding security or privacy issues surrounding use
of the product or protection of the data used or created by the product. Define any user
identity authentication requirements. Refer to any external policies or regulations containing
security issues that affect the product. Define any security or privacy certifications that must
be satisfied. Ensure all compliance requirements are mentioned in the document.

The SRS is a specification for a specific software product, program, or set of
applications that perform particular functions in a specific environment. It serves
several goals depending on who is writing it. First, the SRS could be written by the
client of a system. Second, the SRS could be written by a developer of the system.
The two cases create entirely different situations and establish different purposes
for the document altogether. In the first case, the SRS is used to define the needs
and expectations of the users. In the second case, the SRS is written for various
purposes and serves as a contract document between customer and developer.
Characteristics of good SRS

Above diagram courtesy https://www.javatpoint.com/software-requirement-specifications

Following are the features of a good SRS document:

1. Correctness: User review is used to verify the accuracy of the requirements stated in the
SRS. The SRS is said to be correct if it covers all the needs that are truly expected from the
system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design,


constraints, attributes, or external interfaces.

(2). Definition of their responses of the software to all realizable classes of input data in all
available categories of situations.

Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions
of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflicts with other requirements. There are three types of possible conflict in
the SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but in
another as textual.

(b) One condition may state that all lights shall be green while another states that all lights
shall be blue.

(2). There may be a logical or temporal conflict between two specified actions. For
example,

(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.

(b) One condition may state that "A" must always follow "B," while other requires that "A
and B" co-occurs.

(3). Two or more requirements may define the same real-world object but use different terms
for that object. For example, a program's request for user input may be called a "prompt" in
one requirement and a "cue" in another. The use of standard terminology and descriptions
promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This means that each element is uniquely interpreted. In case a term is used
with multiple meanings, the requirements document should clarify the intended meaning so
that it is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if
each requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.

Typically, all requirements are not equally important. Some prerequisites may be essential,
especially for life-critical applications, while others may be desirable. Each element should
be identified to make these differences clear and explicit. Another way to rank requirements
is to distinguish classes of items as essential, conditional, and optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable
of quickly incorporating changes to the system to some extent. Modifications should be
properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a
cost-effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.

8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if
it facilitates the referencing of each condition in future development or enhancement
documentation.

There are two types of Traceability:

1. Backward Traceability: This depends upon each requirement explicitly referencing its
source in earlier documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique name
or reference number.

The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design document is modified, it is necessary
to be able to ascertain the complete set of requirements that may be concerned by those
modifications.
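A minimal Python sketch of a traceability matrix (all identifiers are invented for illustration): backward traceability links each requirement to its source, while forward traceability links it to the design and test artifacts derived from it.

```python
# Each requirement records its origin (backward traceability)
# and the artifacts derived from it (forward traceability).
trace = {
    "SRS-001": {"source": "client interview 2024-01-10",
                "design": ["DD-3.1"], "tests": ["TC-12", "TC-13"]},
    "SRS-002": {"source": "questionnaire Q7",
                "design": ["DD-4.2"], "tests": ["TC-20"]},
}

def impacted_tests(requirement_id):
    """If a requirement changes, list the test cases to revisit."""
    return trace[requirement_id]["tests"]

print(impacted_tests("SRS-001"))  # -> ['TC-12', 'TC-13']
```

During maintenance, such a table is what lets the team ascertain the complete set of requirements affected by a code or design change.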

9. Design Independence: There should be an option to select from multiple design
alternatives for the final system. More specifically, the SRS should not contain any
implementation details.

10. Testability: An SRS should be written in such a way that it is simple to generate test
cases and test plans from the document.

11. Understandable by the customer: An end user may be an expert in his/her own
domain but might not be trained in computer science. Hence, the use of formal notations
and symbols should be avoided as far as possible. The language should be kept simple
and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly, whereas for a feasibility study, less detail is needed.
Hence, the level of abstraction varies according to the purpose of the SRS.

Properties of a good SRS document

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous, consistent,
and complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.

Structured: It should be well-structured. A well-structured document is simple to understand
and modify. In practice, the SRS document undergoes several revisions to cope with the
user requirements. Often, user requirements evolve over a period of time. Therefore, to make
the modifications to the SRS document easy, it is vital to make the document well-structured.
Black-box view: It should only define what the system should do and refrain from stating
how to do it. This means that the SRS document should define the external behavior of
the system and not discuss implementation issues. The SRS document should view the system
to be developed as a black box and should define its externally visible behavior. For this
reason, the SRS document is also known as the black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.

Response to undesired events: It should characterize acceptable responses to unwanted
events. These are called system responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have
been met in an implementation.
Entity/Relationship Modelling
 Entity/Relationship models consist of:

 Entities.

 Attributes.

 Relationships.

 E/R Diagrams.

 Database Design

 Before we look at how to create and use a database, we'll look at how to design one.
We need to consider:

 Conceptual design - build a model independent of the choice of DBMS. What is the
database going to be used for?

 Logical design - create the database in a given DBMS. What tables, keys, and
constraints are needed?

 Physical design - how the database is stored in hardware.

 E/R Modelling is used for conceptual design

 Entities – objects living or non-living or items of interest.

 Attributes - facts about, or properties or characteristics of an entity.

 Relationships - links between entities.

 Example

 In a University database we might have entities for Students, Modules (admissions)
and Lecturers. Students might have attributes such as their ID, Name, and Course, and
could have relationships with Modules (admissions) and Lecturers (tutor/tutee).

 E/R Models are often represented as E/R diagrams that

 Give a conceptual view of the database.

 Are independent of the choice of DBMS.


 Can identify some problems in a design.

One-to-many relationship

Entities :-
 Entities represent objects or things of interest

 Physical things like students, lecturers, employees, products.

 More abstract things like modules, orders, courses, projects.

 Entities have

 A general type or class, such as Lecturer or Module.

 Instances of that particular type, such as Ashwin Mehta or Sidraa Khan are
instances of Lecturer.

 Attributes (such as name, email address).
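The university example above can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names (Student, Module, Lecturer, tutor) come from the example, everything else is assumed:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Entity types from the university example; attributes become fields.
@dataclass
class Lecturer:
    name: str

@dataclass
class Module:
    code: str

@dataclass
class Student:
    id: int
    name: str
    course: str
    tutor: Optional[Lecturer] = None                      # Student-Lecturer relationship (tutor/tutee)
    modules: List[Module] = field(default_factory=list)   # Student-Module relationship

s = Student(1, "Asha", "CS", tutor=Lecturer("Ashwin Mehta"))
s.modules.append(Module("SE101"))
```

Such a model is conceptual: it captures entities, attributes, and relationships without committing to any DBMS.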

Chapter 7
Metrics of Software Engineering
A computer program is an implementation of an algorithm considered to be a collection of
tokens which can be classified as either operators or operands. Halstead’s metrics are
included in a number of current commercial tools that count software lines of code. By
counting the tokens and determining which are operators and which are operands, the
following base measures can be collected :

n1 = Number of distinct operators.


n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.

In addition to the above, Halstead defines the following :


n1* = Number of potential operators.
n2* = Number of potential operands.
Halstead refers to n1* and n2* as the minimum possible number of operators and operands
for a module or a program. This minimum number would be embodied in the programming
language itself, in which the required operation would already exist (for example, in the C
language, any program must contain at least the definition of the function main()), possibly
as a function or as a procedure: n1* = 2, since at least 2 operators must appear for any
function or procedure: 1 for the name of the function and 1 to serve as an assignment or
grouping symbol; and n2* represents the number of parameters, without repetition, which
would need to be passed to the function or the procedure.

Halstead metrics are :

 Halstead Program Length – The total number of operator occurrences and the total
number of operand occurrences.
N = N1 + N2
And the estimated program length is N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program length:
 NJ = log2(n1!) + log2(n2!)
 NB = n1 * log2n2 + n2 * log2n1
 NC = n1 * sqrt(n1) + n2 * sqrt(n2)
 NS = (n * log2n) / 2
 Halstead Vocabulary – The total number of distinct operators and distinct operands.
n = n1 + n2
 Program Volume – Proportional to program size, it represents the size, in bits, of the
space necessary for storing the program. This parameter is dependent on the specific
algorithm implementation. The properties V, N, and the number of lines in the code are
shown to be linearly connected and equally valid for measuring relative program size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size, "bits". It is the actual size
of a program if a uniform binary encoding for the vocabulary is used. The estimated
number of errors is given by: error = Volume / 3000
 Potential Minimum Volume – The potential minimum volume V* is defined as the
volume of the most succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
 Program Level – To rank the programming languages, the level of abstraction provided
by the programming language, Program Level (L) is considered. The higher the level of a
language, the less effort it takes to develop a program using that language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program written at
the highest possible level (i.e., with minimum size).
And the estimated program level is L^ = (2 * n2) / (n1 * N2)
 Program Difficulty – This parameter shows how difficult to handle the program is.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the program level decreases
and the difficulty increases. Thus, programming practices such as redundant usage of
operands, or the failure to use higher-level control constructs, will tend to increase the
volume as well as the difficulty.
 Programming Effort – Measures the amount of mental activity needed to translate the
existing algorithm into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume

 Language Level – Shows the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is written in a low-level
program language. For example, it is easier to program in Pascal than in Assembler.
lambda = L * V* = L^2 * V

 Intelligence Content – Determines the amount of intelligence presented (stated) in the
program. This parameter provides a measurement of program complexity, independently
of the program language in which it was implemented.
I = V / D
 Programming Time – Shows the time (in minutes) needed to translate the existing
algorithm into an implementation in the specified program language.
T = E / (f * S)
The concept of the processing rate of the human brain, developed by the psychologist
John Stroud, is used here. Stroud defined a moment as the time required by the human
brain to carry out the most elementary decision. The Stroud number S is therefore the
number of Stroud moments per second, with 5 <= S <= 20. The value of S has been
empirically developed from psychological reasoning, and its recommended value for
programming applications is 18, which is the value Halstead uses.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
Counting rules for C language –

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( ) {…}, all control
statements e.g., if ( ) {…}, if ( ) {…} else {…}, etc. are considered as operators.
8. In control construct switch ( ) {case:…}, switch as well as all the case statements are con-
sidered as operators.
9. The reserve words like return, default, continue, break, sizeof, etc., are considered as op-
erators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator and the label is counted as an operand.
12. The unary and binary occurrences of “+” and “-” are dealt with separately. Similarly,
“*” (multiplication operator) is dealt with separately.
13. In array variables such as “array-name [index]”, “array-name” and “index” are con-
sidered as operands and [ ] is considered as an operator.
14. In structure variables such as “struct-name.member-name” or “struct-name -> mem-
ber-name”, struct-name and member-name are taken as operands and ‘.’, ‘->’ are taken as
operators. Same names of member elements in different structure variables are counted as
unique operands.
15. All hash directives are ignored.
Example – List the operators and operands and also calculate the values of the software
science measures for the following function:

int sort (int x[ ], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}

Explanation –

operators    occurrences    operands    occurrences

int          4              sort        1
()           5              x           7
,            4              n           3
[]           7              i           8
if           2              j           7
<            2              save        3
;            11             im1         3
for          2              2           2
=            6              1           3
-            1              0           1
<=           2
++           2
return       2
{}           3

n1 = 14      N1 = 53        n2 = 10     N2 = 38

Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x: the array holding the integers to be sorted; this is used both as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds
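The hand-computed values above can be reproduced directly from the token counts. The following sketch uses the counts from the table (n1 = 14, n2 = 10, N1 = 53, N2 = 38, n2* = 3); small differences from the rounded values in the text (e.g., T comes out near 617 seconds rather than 610) are rounding effects:

```python
import math

# Token counts for the sort() example above
n1, n2 = 14, 10        # distinct operators, distinct operands
N1, N2 = 53, 38        # total operator and operand occurrences
n2_star = 3            # unique input/output parameters

N = N1 + N2                                        # program length: 91
n = n1 + n2                                        # vocabulary: 24
V = N * math.log2(n)                               # volume: ~417.23 bits
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length: ~86.5
V_star = (2 + n2_star) * math.log2(2 + n2_star)    # potential minimum volume: ~11.61
L = V_star / V                                     # program level: ~0.028
L_hat = (2 * n2) / (n1 * N2)                       # estimated program level: ~0.038
D = (n1 / 2) * (N2 / n2)                           # difficulty
E = D * V                                          # effort
T = E / 18                                         # time in seconds (Stroud number S = 18)
```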

Advantages of Halstead Metrics:


 It is simple to calculate.
 It measures overall quality of the programs.
 It predicts the rate of error.
 It predicts maintenance effort.
 It does not require the full analysis of programming structure.
 It is useful in scheduling and reporting projects.
 It can be used for any programming language.

Disadvantages of Halstead Metrics:

 It depends on the complete code.
 It has no use as a predictive estimating model.

Cyclomatic complexity
Cyclomatic complexity of a code section is the quantitative measure of the number of
linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the Control Flow Graph of the program. The nodes in the
graph represent the smallest groups of commands of a program, and a directed edge
connects two nodes if the second command might immediately follow the first command.
For example, if the source code contains no control flow statement then its cyclomatic
complexity will be 1 and the source code contains a single path. Similarly, if the source
code contains one if condition then the cyclomatic complexity will be 2, because there will
be two paths: one for true and the other for false.
Mathematically, for a structured program, the control flow graph is a directed graph in
which an edge joins two basic blocks of the program when control may pass from the first
to the second.
So, cyclomatic complexity M would be defined as,

M = E – N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components

Steps that should be followed in calculating cyclomatic complexity and designing test
cases are:

 Construction of the graph with nodes and edges from the code.
 Identification of independent paths.
 Cyclomatic complexity calculation.
 Design of test cases.

Consider a section of code such as:

A = 10
IF B > C THEN
A=B
ELSE
A=C
ENDIF
Print A
Print B
Print C

Control Flow Graph of above code


The cyclomatic complexity for the above code is calculated from its control flow graph. The
graph has seven nodes and seven edges; hence the cyclomatic complexity is 7 - 7 + 2 = 2.
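The same calculation can be scripted from an edge list. The graph below encodes the control flow of the code above; the particular node numbering is an assumption for illustration:

```python
# Control flow graph: 1:A=10  2:IF B>C  3:A=B  4:A=C  5:ENDIF
#                     6:Print A  7:Print B, Print C
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 7)]
nodes = {v for edge in edges for v in edge}

E = len(edges)     # 7 edges
N = len(nodes)     # 7 nodes
P = 1              # one connected component
M = E - N + 2 * P
print(M)           # -> 2
```

One decision point (the IF) yields a complexity of 2, matching the two test cases needed for the true and false branches.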
Use of Cyclomatic Complexity:

 Determining the independent path executions has proven to be very helpful for
developers and testers.
 It can make sure that every path has been tested at least once.
 It thus helps to focus more on the uncovered paths.
 Code coverage can be improved.
 The risk associated with the program can be evaluated.
 Using this metric early in the development process helps in reducing the risks.
Advantages of Cyclomatic Complexity:
 It can be used as a quality metric; it gives the relative complexity of various designs.
 It is faster to compute than Halstead’s metrics.
 It is used to measure the minimum effort and best areas of concentration for testing.
 It is able to guide the testing process.
 It is easy to apply.
Disadvantages of Cyclomatic Complexity:
 It is a measure of the program’s control complexity and not its data complexity.
 It treats nested and non-nested conditional structures alike, although nested
structures are harder to understand.
 In the case of simple comparisons and decision structures, it may give a misleading figure.

Lines of code

A line of code (LOC) is any line of program text that is not a comment or a blank line,
including header lines, regardless of the number of statements or fragments of statements on
the line. LOC clearly includes all lines containing the declaration of any variable, and
executable and non-executable statements. As Lines of Code (LOC) only counts the volume
of code, you can only use it to compare or estimate projects that use the same language and
are coded using the same coding standards.
Features :
 Variations such as “source lines of code” (SLOC) are used to describe the size of a codebase.
 LOC is frequently used in arguments about productivity and project size.
 It is used in assessing a project’s performance or efficiency.
Advantages :
 It is the most used metric in cost estimation.
 Its alternatives have many problems as compared to this metric.
 It makes estimating effort very easy.
Disadvantages :
 Very difficult to estimate the LOC of the final program from the problem specification.
 It correlates poorly with quality and efficiency of code.
 It doesn’t consider complexity.
Research has shown a rough correlation between LOC and the overall cost and length of
developing a project/ product in Software Development, and between LOC and the number
of defects. This means the lower your LOC measurement is, the better off you probably are
in the development of your product.
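As a rough illustration of the counting rule above, here is a minimal LOC counter that skips blank lines and single-line comments. It is a sketch only; real tools also handle block comments and other language-specific cases:

```python
def count_loc(source, comment_prefix="#"):
    """Count lines that are neither blank nor pure comment lines."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            loc += 1
    return loc

sample = """# add two numbers
def add(a, b):

    # result
    return a + b
"""
print(count_loc(sample))   # -> 2
```

Only the `def` line and the `return` line are counted; the two comment lines and the blank line are excluded.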

Introduction

There can be various methods to calculate function points; you can define your own custom
method based on your specific requirements. But "Why re-invent the wheel?" when you
already have a tried and tested method given by IFPUG, based on their experience and case
studies.

The method used to calculate function points is known as FPA (Function Point Analysis). In
a single line, it can be defined as "a method of quantifying the size and complexity of a
software system in terms of the functions that the system delivers to the user". Let's start
learning how to calculate the function points.
Functionalities

Following functionalities are counted while counting the function points of the system.

 Data Functionality
o Internal Logical Files (ILF)
o External Interface Files (EIF)
 Transaction Functionality
o External Inputs (EI)
o External Outputs (EO)
o External Queries (EQ)

Now, logically, if you divide your software application into parts, each part will always map
to one or more of the 5 functionalities mentioned above. A software application cannot be
built without using at least one of the functionalities above.

Methodology of calculating the function points

We first need to understand the system with respect to function points; for that, consider an
application model as below for measuring the function points.

Now, to calculate the function points, we need to follow the following steps:

1. Measure the application boundary
a. The application boundary defines what is external to the application.
b. It is dependent on the user's external business view of the application and not on the
technical and/or implementation considerations.
2. Identify the data functionalities (ILF and EIF)
a. A user-identifiable group of data, logically related and maintained within the boundary of
the application through one or more elementary processes, is known as an ILF.
b. A user-identifiable group of data, logically related, referenced by the application but
maintained within the boundary of a different application, is known as an EIF.


c. A few other terms, RET and DET, must be understood here as well to determine the
function points.
d. A RET (record element type) is a user-recognizable subgroup of data elements within
an ILF or EIF.
e. A DET (data element type) is a unique, user-recognizable, non-repeated field either
maintained in an ILF or retrieved from an ILF or EIF.
3. Identify the transaction functionalities (EI, EO, EQ)
a. All three transactional functionalities are "elementary processes".
b. An elementary process is the smallest unit of activity that is meaningful to the user(s).
c. The elementary process must be self-contained and leave the business of the application in a
consistent state.
d. An EI (External Input) is an elementary process of the application which processes data
that enters from outside the boundary of the application and maintains one or more ILFs.
e. An EO (External Output) is an elementary process that generates data that exits the
boundary of the application (i.e. presents information to the user) through processing logic
and retrieval of data from ILFs or EIFs. The processing logic contains mathematical
calculations, derived data, etc.
f. An EQ (External Query) is an elementary process that results in retrieval of data that is sent
outside the application boundary (i.e. presents information to the user) through retrieval of
data from ILFs or EIFs. The processing logic should not contain any mathematical formulas,
derived data, etc.
4. Using the above data we can calculate the UFP (Unadjusted Function Points)
a. After all the basic data & transactional functionalities of the system have been defined we
can use the following set of tables below to calculate the total UFP.
b. Now for each type of Functionality determine the UFP's based on the below table.
c. For EI's, EO's & EQ's determine the FTR's and DET's and based on that determine the
Complexity and hence the Number of UFP's it contributes. We have to calculate this
for all the EI's, EO's & EQ's.
External Inputs (EI)

File Type Referenced (FTR)    Data Elements (DET)
                              1-4            5-15           Greater than 15
Less than 2                   Low (3)        Low (3)        Average (4)
2                             Low (3)        Average (4)    High (6)
Greater than 2                Average (4)    High (6)       High (6)

External Outputs (EO)

File Type Referenced (FTR)    Data Elements (DET)
                              1-5            6-19           Greater than 19
Less than 2                   Low (4)        Low (4)        Average (5)
2 or 3                        Low (4)        Average (5)    High (7)
Greater than 3                Average (5)    High (7)       High (7)

External Inquiry (EQ)

File Type Referenced (FTR)    Data Elements (DET)
                              1-5            6-19           Greater than 19
Less than 2                   Low (3)        Low (3)        Average (4)
2 or 3                        Low (3)        Average (4)    High (6)
Greater than 3                Average (4)    High (6)       High (6)
d. For ILF's & EIF's, determine the RET's and DET's and, based on that, determine the
complexity and hence the number of UFP's contributed. We have to calculate this
for all the ILF's & EIF's.

Internal Logical File (ILF)

Record Element Types (RET)    Data Elements (DET)
                              1-19           20-50          51 or More
1 RET                         Low (7)        Low (7)        Average (10)
2 to 5 RET                    Low (7)        Average (10)   High (15)
6 or more RET                 Average (10)   High (15)      High (15)

External Interface File (EIF)

Record Element Types (RET)    Data Elements (DET)
                              1-19           20-50          51 or More
1 RET                         Low (5)        Low (5)        Average (7)
2 to 5 RET                    Low (5)        Average (7)    High (10)
6 or more RET                 Average (7)    High (10)      High (10)
e. Once we have the scores of all the functionalities, we can get the UFP as

UFP = Sum of the complexities of all the EI's, EO's, EQ's, ILF's and EIF's

5. Next comes the calculation of the VAF (Value Adjustment Factor), which is based on the
TDI (Total Degree of Influence of the 14 General System Characteristics).
a. TDI = Sum of (DI of the 14 General System Characteristics), where DI stands for Degree
of Influence.
b. These 14 GSC are

1. Data Communication

2. Distributed Data Processing

3. Performance

4. Heavily Used Configuration

5. Transaction Rate

6. Online Data Entry

7. End-User Efficiency

8. Online Update

9. Complex Processing

10. Reusability

11. Installation Ease

12. Operational Ease

13. Multiple Sites

14. Facilitate Change

c. Each GSC is rated on a scale of 0-5


6. Once the TDI is determined we can put it in the formula below to get the VAF.
VAF = 0.65 + (0.01 * TDI)

7. Finally the Adjusted Function Points or Function Points are

FP = UFP * VAF

8. Now these FP's can be used to determine the size of the software; they can also be used to
quote the price of the software and to estimate the time and effort required to complete it.
9. Effort in Person Months = FP divided by the number of FP's per month (using your
organization's or an industry benchmark)
10. Schedule in Months = 3.0 * (person-months)^(1/3)

For e.g., for a 65 person-month project:

Optimal Schedule = 3.0 * 65^(1/3) ~ 12 months

Optimal Team Size = 65 / 12 ~ 5 or 6 persons.
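The steps above can be sketched end to end. The complexity weights are those from the tables above and the VAF and schedule formulas are from the text; the function inventory and the TDI value are invented purely for illustration:

```python
# Complexity weights (low, average, high) from the IFPUG tables above
WEIGHTS = {"EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
           "ILF": (7, 10, 15), "EIF": (5, 7, 10)}
RANK = {"low": 0, "average": 1, "high": 2}

# Hypothetical application: (functionality type, assessed complexity)
functions = [("EI", "average"), ("EI", "low"), ("EO", "high"),
             ("EQ", "low"), ("ILF", "low"), ("EIF", "average")]

ufp = sum(WEIGHTS[t][RANK[c]] for t, c in functions)   # unadjusted FP: 31

tdi = 30                          # assumed sum of the 14 GSC ratings (each 0-5)
vaf = 0.65 + 0.01 * tdi           # value adjustment factor: 0.95
fp = ufp * vaf                    # adjusted function points

person_months = 65                            # example from the text
schedule = 3.0 * person_months ** (1 / 3)     # optimal schedule: ~12 months
team_size = person_months / round(schedule)   # ~5 or 6 persons
```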


Chapter 7

Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the
design process effectively. Effectively managing the complexity will not only reduce the effort
needed for design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design

Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is divided into smaller pieces so that each piece can be handled
separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning


1. Software is easy to understand

2. Software becomes simple

3. Software is easy to test

4. Software is easy to modify

5. Software is easy to maintain


6. Software is easy to expand

These pieces cannot be entirely independent of each other as they together form the system. They
have to cooperate and communicate to solve the problem. This communication adds complexity.

Note: As the number of partitions increases, the cost of partitioning and the complexity of
communication also increase.

Abstraction

An abstraction is a tool that enables a designer to consider a component at an abstract level
without bothering about the internal details of the implementation. Abstraction can be used for
existing elements as well as for the component being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction

2. Data Abstraction

Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm used to accomplish the function are not visible to the user of the
function.

Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for
Object Oriented design approaches.
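Both mechanisms can be illustrated with a short, hypothetical sketch: sort_ids exposes what it does but hides the algorithm (functional abstraction), while Stack hides its internal data representation behind push/pop (data abstraction):

```python
def sort_ids(ids):
    # Functional abstraction: the caller sees only the function performed
    # (sorting); the underlying algorithm is hidden.
    return sorted(ids)

class Stack:
    # Data abstraction: users push and pop; the internal list
    # representation is not part of the interface.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(sort_ids([3, 1, 2]), s.pop())   # -> [1, 2, 3] 2
```

Swapping the sorting algorithm, or replacing the list inside Stack with another structure, would not affect any caller.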

Modularity

Modularity refers to the division of software into separate modules which are differently named
and addressed and are integrated later to obtain the completely functional software. It is the
only property that allows a program to be intellectually manageable. Single large programs are
difficult to understand and read due to the large number of reference variables, control paths,
global variables, etc. The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Advantages and Disadvantages of Modularity


In this topic, we will discuss various advantage and disadvantage of Modularity.

Advantages of Modularity
There are several advantages of Modularity

o It allows large programs to be written by several or different people.
o It encourages the creation of commonly used routines which can be placed in a library and
used by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing, making programs more accessible to test.
o It produces well-designed and more readable programs.

Disadvantages of Modularity
There are several disadvantages of Modularity

o Execution time may be, but is not certainly, longer.
o Storage size may be, but is not certainly, increased.
o Compilation and loading time may be longer.
o Inter-module communication problems may be increased.
o More linkage is required, run-time may be longer, more source lines must be written,
and more documentation has to be done.
Modular Design
Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of
modular design in detail in this section:

1. Functional Independence: Functional independence is achieved by developing
functions that perform only one kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation more accessible and faster.
Independent modules are easier to maintain and test, reduce error propagation, and can be
reused in other programs as well. Thus, functional independence is a good design feature which
ensures software quality.

It is measured using two criteria:

o Cohesion: It measures the relative function strength of a module.

o Coupling: It measures the relative interdependence among modules.

2. Information hiding: The principle of information hiding suggests that modules should
be characterized by design decisions that are hidden from all other modules. In other words,
modules should be specified and designed so that the data included within a module is
inaccessible to other modules that have no need for such information.

The use of information hiding as a design criterion for modular systems provides the most
significant benefits when modifications are required during testing and, later, during software
maintenance. This is because, as most data and procedures are hidden from other parts of the
software, inadvertent errors introduced during modifications are less likely to propagate to
different locations within the software.

Strategy of Design

A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, to change. Structured design methods help developers to deal with
the size and complexity of programs. Analysts generate instructions for the developers about
how code should be composed and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach

2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main
components and then decomposes them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and
moves up the hierarchy. This approach is suitable in the case of an existing system.

Coupling and Cohesion


Module Coupling
In software engineering, coupling is the degree of interdependence between software modules.
Two modules that are tightly coupled are strongly dependent on each other, whereas two
modules that are loosely coupled are not strongly dependent on each other. Uncoupled modules
have no interdependence at all between them.

The various types of coupling techniques are shown in fig:


A good design is the one that has low coupling. Coupling is measured by the number of relations
between the modules. That is, the coupling increases as the number of calls between modules
increase or the amount of shared data is large. Thus, it can be said that a design with high coupling
will have more errors.

Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between M1 and M2. In this case, the
modules are subordinate to different modules; therefore, there is no direct coupling.

2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.

3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite
data items such as structure, objects, etc. When the module passes non-global data structure or
entire structure to another module, they are said to be stamp coupled. For example, passing
structure variable in C or object in C++ language to a module.

4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the flow of instruction execution in another.

5. External Coupling: External coupling arises when two modules share an externally
imposed data format, communication protocol, or device interface. This is related to
communication with external tools and devices.

6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code,
e.g., a branch from one module into another module.
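The difference between data, stamp, and control coupling can be seen in a short hypothetical sketch (the Rectangle example is invented for illustration):

```python
from dataclasses import dataclass

# Data coupling: only elementary data items are passed.
def area(width, height):
    return width * height

@dataclass
class Rectangle:
    width: float
    height: float
    color: str = "red"

# Stamp coupling: a composite structure is passed, even though
# only some of its fields are actually needed.
def area_of(rect):
    return rect.width * rect.height

# Control coupling: a flag from the caller directs the flow of
# execution inside the callee.
def render(rect, as_outline):
    return "outline" if as_outline else "filled"
```

Lowering coupling here would mean, for example, passing only width and height instead of the whole Rectangle, or splitting render into two functions instead of passing a control flag.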

Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low
cohesion."
Types of Modules Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of
a module cooperate to achieve a single function.

2. Sequential Cohesion: A module is said to possess sequential cohesion if the elements
of the module form the components of a sequence, where the output from one component
of the sequence is the input to the next.

3. Communicational Cohesion: A module is said to have communicational cohesion,


if all tasks of the module refer to or update the same data structure, e.g., the set of func -
tions defined on an array or a stack.

4. Procedural Cohesion: A module is said to be procedural cohesion if the set of pur-


pose of the module are all parts of a procedure in which particular sequence of steps has to
be carried out for achieving a goal, e.g., the algorithm for decoding a message.

5. Temporal Cohesion: When a module includes functions that are associated by the
fact that all the methods must be executed in the same time, the module is said to exhibit
temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform a similar operation. For example Error handling, data input and data out -
put, etc.

7. Coincidental Cohesion: A module is said to have coincidental cohesion if it per-


forms a set of tasks that are associated with each other very loosely, if at all.
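The contrast between the strongest level (functional) and the weakest level (coincidental) can be sketched in C; the names below are our own illustrations, not from the text:

```c
#include <assert.h>
#include <stdio.h>

/* Functional cohesion: every statement cooperates to achieve a single
 * function, computing the greatest common divisor. */
int gcd(int a, int b)
{
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Coincidental cohesion: the two tasks below share nothing except the
 * accident of living in the same module. */
void misc_tasks(int *counter, const char *message)
{
    (*counter)++;               /* task 1: bump an unrelated counter */
    printf("%s\n", message);    /* task 2: print an unrelated message */
}
```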

Differentiate between Coupling and Cohesion

1. Coupling is also called Inter-Module Binding, whereas cohesion is also called Intra-Module Binding.

2. Coupling shows the relationships between modules, whereas cohesion shows the relationships within a module.

3. Coupling shows the relative independence between modules, whereas cohesion shows a module's relative functional strength.

4. While designing, you should aim for low coupling (dependency among modules should be low), whereas you should aim for high cohesion (a cohesive component/module focuses on a single function, i.e., single-mindedness, with little interaction with other modules of the system).

5. In coupling, modules are linked to other modules, whereas in cohesion the module focuses on a single thing.

Function Oriented Design

Function-oriented design is an approach to software design in which the system is decomposed into a set of interacting units or modules, where each unit or module has a clearly defined function. Thus, the system is designed from a functional viewpoint.
Design Notations
Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be represented
graphically or mathematically by the following:

Data Flow Diagram


Data-flow design is concerned with designing a series of functional transformations that convert
system inputs into the required outputs. The design is described as data-flow diagrams. These
diagrams show how data flows through a system and how the output is derived from the input
through a series of functional transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally understandable without specialized training, especially if control information is excluded. They show end-to-end processing; that is, the flow of processing from the point where data enters the system to the point where it leaves the system can be traced.

Data-flow design is an integral part of several design methods, and most CASE tools support data-flow diagram creation. Different methods may use different icons to represent data-flow diagram entities, but their meanings are similar.

The DFD notation is based on a small set of standard symbols: processes (functional transformations), data flows, data stores, and external entities.

As an example, consider a report generator. It produces a report which describes all of the named entities in a data-flow diagram. The user inputs the name of the design represented by the diagram. The report generator then finds all the names used in the data-flow diagram, looks each one up in a data dictionary, and retrieves information about it. This information is then collated into a report which is output by the system.
Data Dictionaries

A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of the system.

A data dictionary lists the purpose of all data items and the definition of all composite data elements in terms of their component data items. For example, a data dictionary entry may state that the data item grossPay consists of the components regularPay and overtimePay.

grossPay = regularPay + overtimePay

For the smallest units of data elements, the data dictionary lists their name and their type.

A data dictionary plays a significant role in any software development process because of the
following reasons:

 A data dictionary provides a standard language for all relevant information, for use by the engineers working on a project. A consistent vocabulary for data items is essential because, in large projects, different engineers tend to use different terms to refer to the same data, which causes unnecessary confusion.
 The data dictionary provides the analyst with a means to determine the definition of various data structures in terms of their component elements.
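The grossPay entry above maps naturally onto a composite type in code. A minimal C sketch (the type and function names are ours, chosen to mirror the dictionary entry):

```c
#include <assert.h>

/* The composite data item grossPay from the data dictionary, expressed
 * as a structure whose fields are its component data items. */
struct Pay {
    double regularPay;
    double overtimePay;
};

/* grossPay = regularPay + overtimePay */
double grossPay(struct Pay p)
{
    return p.regularPay + p.overtimePay;
}
```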

Structured Charts

A structured chart partitions a system into black boxes. A black box is a component whose functionality is known to the user without knowledge of its internal design.

A structured chart is a graphical representation which shows:

o How the system is partitioned into modules
o The hierarchy of the component modules
o The relation between processing modules
o The interaction between modules
o The information passed between modules

The notation used in structured charts includes rectangles for modules, arrows for module invocation, and annotated arrows for the data and control information passed between modules.

Pseudo-code

Pseudo-code notation can be used in both the preliminary and detailed design phases. Using pseudo-code, the designer describes system characteristics using short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
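As an illustration (our own example, not from the original text), a pseudo-code description of a routine that averages a list of marks might read:

```
Procedure ComputeAverage
    total = 0
    count = 0
    While more marks remain Do
        read mark
        total = total + mark
        count = count + 1
    End-While
    If count > 0 Then
        average = total / count
    Else
        report "no marks entered"
    End-If
End Procedure
```

The keywords carry the control structure, while the ordinary English phrases between them carry the design intent.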

Coding

Coding is the process of transforming the design of a system into a computer-language format. The coding phase of software development is concerned with translating the design specification into source code. It is necessary to write source code and internal documentation so that conformance of the code to its specification can be easily verified.

Coding is done by coders or programmers, who may be different people from the designers. The goal is not so much to reduce the effort and cost of the coding phase as to cut the cost of later stages: the cost of testing and maintenance can be significantly reduced with efficient coding.
Goals of Coding

1. To translate the design of the system into a computer-language format: Coding transforms the design of a system into a form which can be executed by a computer and which performs the tasks specified during the design phase.

2. To reduce the cost of later phases: The cost of testing and maintenance can be
significantly reduced with efficient coding.

3. Making the program more readable: Programs should be easy to read and understand. Having readability and understandability as clear objectives of the coding activity can itself help in producing more maintainable software.

For implementing our design in code, we require a high-level language. A programming language should have the following characteristics:

Characteristics of Programming Language

Following are the characteristics of Programming Language:

Readability: A good high-level language allows programs to be written in a form that resembles an English-like description of the underlying functions, so the coding may be done in an essentially self-documenting way.
Portability: High-level languages, being virtually machine-independent, make it easy to develop portable software.

Generality: Most high-level languages allow the writing of a vast collection of programs, thus relieving the programmer of the need to become an expert in many diverse languages.

Brevity: A language should allow the algorithm to be implemented with a small amount of code. Programs expressed in high-level languages are often significantly shorter than their low-level equivalents.

Error checking: A programmer is likely to make many errors in the development of a computer program. Many high-level languages provide extensive error checking, both at compile time and at run time.

Cost: The ultimate cost of a programming language is a function of many of its characteristics.

Quick translation: It should permit quick translation.

Efficiency: It should permit the generation of efficient object code.

Modularity: It is desirable that programs can be developed in the language as several separately
compiled modules, with the appropriate structure for ensuring self-consistency among these
modules.

Widely available: Language should be widely available, and it should be feasible to provide
translators for all the major machines and all the primary operating systems.

A coding standard lists several rules to be followed during coding, such as the way variables are to
be named, the way the code is to be laid out, error return conventions, etc.

Coding Standards

General coding standards refer to how the developer writes code; here we discuss some essential standards regardless of the programming language being used.

The following are some representative coding standards:


1. Indentation: Proper and consistent indentation is essential for producing easy-to-read and maintainable programs.
Indentation should be used to:

o Emphasize the body of a control structure such as a loop or a select statement.

o Emphasize the body of a conditional statement

o Emphasize a new scope block

2. Inline comments: Inline comments explaining the functioning of a subroutine, or key aspects of the algorithm, should be used frequently.

3. Rules for limiting the use of globals: These rules define what types of data can be declared global and what cannot.

4. Structured Programming: Structured (or modular) programming methods shall be used. "GOTO" statements shall not be used, as they lead to "spaghetti" code which is hard to read and maintain, except as outlined in the FORTRAN Standards and Guidelines.

5. Naming conventions for global variables, local variables, and constant identifiers: A possible naming convention is that global variable names always begin with a capital letter, local variable names are made of lowercase letters, and constant names are always in capital letters.

6. Error return conventions and exception handling system: The way different functions in a program report error conditions should be standardized within an organization. For example, on encountering an error condition, all functions should consistently return either a 0 or a 1.
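One possible shape of such a convention in C, where every function in a module returns 0 for success and 1 for failure (the function name and the specific convention are illustrative assumptions, not prescribed by the text):

```c
#include <assert.h>
#include <stdio.h>

#define OK    0
#define ERROR 1

/* Every function in this module reports errors the same way:
 * it returns OK (0) on success and ERROR (1) on failure. */
int parse_age(const char *text, int *age_out)
{
    int value;
    if (sscanf(text, "%d", &value) != 1 || value < 0)
        return ERROR;               /* malformed or negative input */
    *age_out = value;
    return OK;
}
```

Because every function follows the same convention, callers can check results uniformly instead of remembering a different rule for each routine.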
Coding Guidelines

General coding guidelines provide the programmer with a set of best practices which can be used to make programs easier to read and maintain. Most of the examples use C language syntax, but the guidelines can be applied to all languages.

The following are some representative coding guidelines recommended by many software
development organizations.

1. Line Length: It is considered a good practice to keep the length of source code lines at or
below 80 characters. Lines longer than this may not be visible properly on some terminals and
tools. Some printers will truncate lines longer than 80 columns.

2. Spacing: The appropriate use of spaces within a line of code can improve readability.

Example:

Bad:     cost=price+(price*sales_tax)
         fprintf(stdout,"The total cost is %5.2f\n",cost);

Better:  cost = price + (price * sales_tax)
         fprintf(stdout, "The total cost is %5.2f\n", cost);

3. The code should be well-documented: As a rule of thumb, there should be at least one comment line, on average, for every three source lines.

4. The length of any function should not exceed 10 source lines: A very lengthy function is generally difficult to understand, as it probably carries out many different functions. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.

5. Do not use goto statements: Use of goto statements makes a program unstructured and very difficult to understand.

6. Inline Comments: Inline comments promote readability.

7. Error Messages: Error handling is an essential aspect of computer programming. This includes not only adding the necessary logic to test for and handle errors but also making error messages meaningful.
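A meaningful message names the operation, the object involved, and a likely remedy, instead of a bare code such as "Error 5". A small C sketch (the function name and wording are our own illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build an error message that tells the user what failed, on which
 * file, and what to check, rather than emitting a numeric code. */
int format_open_error(char *buf, size_t size, const char *filename)
{
    return snprintf(buf, size,
                    "error: cannot open file '%s': check that it exists "
                    "and that you have read permission", filename);
}
```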

Programming Style

Programming style refers to the technique used in writing the source code for a computer program. Most programming styles are designed to help programmers read and understand the program quickly, as well as to avoid making errors. (Older programming styles also focused on conserving screen space.) A good coding style can overcome many of the deficiencies of a poor programming language, while poor style can defeat the intent of an excellent language.

The goal of good programming style is to provide understandable, straightforward, elegant code. The programming style used in a particular program may be derived from the coding standards or code conventions of a company or other computing organization, as well as from the preferences of the individual programmer.

Some general rules or guidelines in respect of programming style:


1. Clarity and simplicity of expression: The program should be designed in such a manner that its objectives are clear.

2. Naming: In a program, you are required to name modules, processes, variables, and so on. Care should be taken that the naming style is not cryptic or unrepresentative.

For example, prefer

area_of_circle = 3.14 * radius * radius;

over

a = 3.14 * r * r;

3. Control Constructs: It is desirable that, as far as possible, single-entry, single-exit constructs be used.

4. Information hiding: The information held in data structures should be hidden from the rest of the system where possible. Information hiding can decrease the coupling between modules and make the system more maintainable.

5. Nesting: Deep nesting of loops and conditions makes a program hard to analyze, both statically and dynamically, and its logic becomes difficult to understand, so it is desirable to avoid deep nesting.

6. User-defined types: Make good use of user-defined data types like enum, class, structure, and union. These data types make program code easier to write and easier to understand.
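A small C sketch of the point (our own example): a named enum and structure make intent explicit, compared with passing bare integer day codes around the program.

```c
#include <assert.h>

/* User-defined types: the names themselves document the data. */
enum Weekday { MON, TUE, WED, THU, FRI, SAT, SUN };

struct Appointment {
    enum Weekday day;
    int hour;               /* 0..23 */
};

int is_weekend(struct Appointment a)
{
    return a.day == SAT || a.day == SUN;
}
```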

7. Module size: The module size should be uniform. The size of the module should not be too
big or too small. If the module size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.

8. Module Interface: A module with a complex interface should be carefully examined.

9. Side-effects: When a module is invoked, it sometimes has the side effect of modifying the program state. Such side effects should be avoided wherever possible.
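The contrast can be sketched in C (both functions are our own illustrations): the first computes its result purely from its argument; the second also mutates a global, so calling it changes program state invisibly to the caller.

```c
#include <assert.h>

/* Side-effect-free: the result depends only on the argument, and no
 * program state is modified. */
int square(int x)
{
    return x * x;
}

/* The form to avoid: every call also mutates a global. */
int call_count = 0;

int square_with_side_effect(int x)
{
    call_count++;           /* hidden modification of program state */
    return x * x;
}
```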

What is Software Testing

Software testing is a process of evaluating the correctness of software by considering all of its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find bugs, errors, or defects.

Software testing provides an independent and objective view of the software and gives assurance of its fitness for purpose. It involves testing all components under the required services to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software.

Testing is mandatory because software that fails in the field due to lack of testing can create dangerous situations. So, without testing, software cannot be deployed to the end user.

What is Testing

Testing is a group of techniques for determining the correctness of an application under a predefined script; however, testing cannot find all the defects of an application. The main intent of testing is to detect failures of the application so that they can be discovered and corrected. Testing does not demonstrate that a product functions properly under all conditions, but only that it fails to work under some specific conditions.

Testing compares the behavior and state of the software against mechanisms by which a problem can be recognized. These mechanisms may include past versions of the same product, comparable products, interfaces of expected purpose, relevant standards, or other criteria, but are not limited to these.

Testing includes an examination of the code and also the execution of the code in various environments and conditions, as well as an examination of all aspects of the code. In the current scenario of software development, the testing team may be separate from the development team, so that the information derived from testing can be used to correct the process of software development.

The success of software depends upon its acceptance by the targeted audience, an easy graphical user interface, strong functionality, load handling, etc. For example, the audience of a banking application is totally different from the audience of a video game. Therefore, when an organization develops a software product, it can assess whether the product will be beneficial to its purchasers and wider audience.

Manual Testing

Manual testing is a software testing process in which test cases are executed manually, without using any automated tool. All test cases are executed manually by the tester from the end user's perspective. Manual testing checks whether the application is working as stated in the requirement document. Test cases are planned and implemented to cover almost 100 percent of the software application, and test case reports are also generated manually.

Manual testing is one of the most fundamental testing processes, as it can find both visible and hidden defects of the software. The difference between the expected output and the output actually produced by the software is defined as a defect. The developer fixes the defects and hands the software back to the tester for retesting.

Manual testing is mandatory for every newly developed software before automated testing. It requires great effort and time, but it gives confidence that the software is free of obvious bugs. Manual testing requires knowledge of manual testing techniques, but not of any automated testing tool.

Manual testing is essential because one of the fundamentals of software testing is that "100% automation is not possible."

There are various methods used for manual testing. Each method is used according to its testing
criteria. Types of manual testing are given below:

Types of Manual Testing:


1. Black Box Testing

2. White Box Testing

3. Unit Testing

4. System Testing

5. Integration Testing

6. Acceptance Testing

How to perform Manual Testing

o First, the tester examines all documents related to the software, to select the testing areas.
o The tester analyses the requirement document to cover all requirements stated by the customer.
o The tester develops test cases according to the requirement document.
o All test cases are executed manually, using black box testing and white box testing.
o If bugs occur, the testing team informs the development team.
o The development team fixes the bugs and hands the software back to the testing team for retesting.

Advantages of Manual Testing

o It does not require programming knowledge while using the black box method.
o It is suitable for testing dynamically changing GUI designs.
o The tester interacts with the software as a real user, so usability and user interface issues can be discovered.
o It helps ensure that the software is as bug-free as possible.
o It is cost-effective.
o It is easy for new testers to learn.

Disadvantages of Manual Testing

o It requires a large number of human resources.
o It is very time-consuming.
o Testers develop test cases based on their skills and experience; there is no evidence that all functions have been covered.
o Test cases cannot be reused; separate test cases need to be developed for each new software.
o It does not provide testing of all aspects of the software.
o Since two teams work together, it is sometimes difficult for them to understand each other's motives, which can mislead the process.

Manual testing tools


Selenium

Selenium is used to test the Web Application.

Appium

Appium is used to test the mobile application.

TestLink

TestLink is used for test management.

Postman

Postman is used for API testing.

Firebug

Firebug is an in-browser web page debugging tool.


JMeter

JMeter is used for load testing of any application.

Mantis

Mantis is used for bug tracking.

Automation Testing

When test case suites are executed using automated testing tools, the process is known as automation testing. In this process, special automation tools control the execution of test cases and compare the actual results with the expected results. Automation testing requires a considerable investment of resources and money.

Generally, repetitive actions, such as regression tests, are tested in automated testing. The tools used in automation testing serve not only regression testing but also automated GUI interaction, data set-up generation, defect logging, and product installation.

The goal of automation testing is to reduce the number of manual test cases, but not to eliminate all of them. Test suites can be recorded using the automation tools, and the tester can replay these suites as required. Automated test suites do not require any human intervention.

Advantages of Automation Testing


o Automation testing takes less time than manual testing.

o A tester can test the response of the software if the execution of the same operation is re -
peated several times.
o Automation Testing provides re-usability of test cases on testing of different versions of the
same software.

o Automation testing is reliable as it eliminates hidden errors by executing test cases again in
the same way.
o Automation Testing is comprehensive as test cases cover each and every feature of the ap-
plication.

o It does not require many human resources; instead of writing test cases and executing them manually, an automation testing engineer is needed to run them.
o The cost of automation testing is lower than that of manual testing because it requires few human resources.

Disadvantages of Automation Testing


o Automation Testing requires high-level skilled testers.

o It requires high-quality testing tools.


o When an automated test case fails, the analysis of the whole event is complicated.
o Test maintenance is expensive, because high-priced licensed testing tools may be necessary.
o Debugging the test scripts is mandatory; if an error in the test suite goes unresolved, it can lead to misleading results.

White Box Testing

The box testing approach to software testing consists of black box testing and white box testing. We are discussing here white box testing, which is also known as glass box testing, structural testing, clear box testing, open box testing, and transparent box testing.

White box testing tests the internal coding and infrastructure of a software product, focusing on checking predefined inputs against expected and desired outputs. It is based on the inner workings of an application and revolves around testing the internal structure. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and on strengthening the security of the software.

The term 'white box' is used because of the internal perspective of the system. The clear box, white box, and transparent box names denote the ability to see through the software's outer shell into its inner workings.

Test cases for white box testing are derived from the design phase of the software development lifecycle. Data flow testing, control flow testing, path testing, branch testing, and statement and decision coverage are all techniques used in white box testing as guidelines to create error-free software.

White box testing follows a sequence of working steps to make testing manageable and to make clear what the next task is. There are some basic steps to perform white box testing.

Generic steps of white box testing

o Design all test scenarios and test cases, and prioritize them.
o Study the code at runtime to examine resource utilization, areas of the code that are never accessed, the time taken by various methods and operations, and so on.
o Test the internal subroutines, checking whether internal subroutines such as non-public methods and interfaces can handle all types of data appropriately.
o Test control statements such as loops and conditional statements, to check their efficiency and accuracy for different data inputs.
o In the last step, perform security testing to check all possible security loopholes by looking at how the code handles security.
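As a small illustration of the branch-coverage idea mentioned above (our own example, not from the text): the unit below contains a single decision, so a white-box test suite must exercise both outcomes of that decision.

```c
#include <assert.h>

/* Unit under test: one decision point gives two branches to cover. */
int abs_value(int x)
{
    if (x < 0)          /* decision point */
        return -x;      /* branch taken when x < 0  */
    return x;           /* branch taken when x >= 0 */
}
```

A test input of -3 covers the first branch and inputs of 5 and 0 cover the second, giving 100% branch coverage of this unit.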

Reasons for white box testing

o To identify internal security holes.
o To check the flow of inputs through the code.
o To check the functioning of conditional loops.
o To test functions, objects, and statements at an individual level.

Advantages of White box testing

o White box testing helps optimize code, so hidden errors can be identified.
o Test cases of white box testing can be easily automated.
o This testing is more thorough than other testing approaches, as it covers all code paths.
o It can be started early in the SDLC, even before a GUI exists.

Disadvantages of White box testing

o White box testing is very time-consuming when it comes to large-scale applications.
o White box testing is expensive and complex.
o It can let errors of omission slip into production, because tests are derived from the code as written rather than from the specification.
o White box testing needs professional programmers who have detailed knowledge and understanding of the programming language and the implementation.

Black box testing

Black box testing is a technique of software testing which examines the functionality of software without peering into its internal structure or coding. The primary source of black box testing is the specification of requirements stated by the customer.

In this method, the tester selects a function, gives it input values to examine its functionality, and checks whether the function produces the expected output. If the function produces correct output, it passes the test; otherwise, it fails. The test team reports the result to the development team and then tests the next function. If severe problems remain after all functions have been tested, the software is given back to the development team for correction.

Generic steps of black box testing

o The black box test is based on the specification of requirements, so this specification is examined first.
o In the second step, the tester creates a positive test scenario and a negative test scenario by selecting valid and invalid input values, to check whether the software processes them correctly.
o In the third step, the tester develops various test cases using techniques such as decision tables, all-pairs testing, equivalence partitioning, error guessing, and cause-effect graphs.
o The fourth step is the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if any flaw is found in the software, it is fixed and tested again.

Test procedure

The test procedure of black box testing is a process in which the tester has specific knowledge of what the software is supposed to do and develops test cases to check the accuracy of the software's functionality.

It does not require programming knowledge of the software. All test cases are designed by considering the input and output of a particular function. A tester knows the definite output of a particular input, but not how the result arises. Various techniques are used in black box testing, such as the decision table technique, boundary value analysis, state transition testing, all-pairs testing, cause-effect graphing, equivalence partitioning, error guessing, the use case technique, and the user story technique.

Test cases

Test cases are created by considering the specification of the requirements. They are generally derived from working descriptions of the software, including the requirements, design parameters, and other specifications. The test designer selects both positive test scenarios, with valid input values, and negative test scenarios, with invalid input values, to determine the correct output. Test cases are mainly designed for functional testing but can also be used for non-functional testing. Test cases are designed by the testing team; the development team of the software is not involved.
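As a sketch of the boundary value analysis technique mentioned above (the specification here is hypothetical, not from the text): suppose the specification says valid ages run from 0 to 120 inclusive. Black-box test cases then probe each boundary and its immediate neighbours, using only the specification and never the code.

```c
#include <assert.h>

/* Unit under test, treated as a black box: the (assumed) specification
 * says that ages 0..120 inclusive are valid. */
int is_valid_age(int age)
{
    return age >= 0 && age <= 120;
}
```

The boundary-value test set is {-1, 0, 120, 121}: the values just outside each boundary must be rejected and the boundary values themselves accepted.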


Practical 1

EasyLeave

1 OBJECTIVE: This project is aimed at developing a web-based Leave Management Tool, which is of importance to either an organization or a college. Easy Leave is an intranet-based application that can be accessed throughout the organization or by a specified group/department. This system can be used to automate the workflow of leave applications and their approvals. The periodic crediting of leave is also automated. There are features like notifications, cancellation of leave, automatic approval of leave, report generators, etc., in this tool.

Functional components of the project: There are registered people in the system. Some are leave approvers. An approver can also be a requestor. In an organization, the hierarchy could be Engineers/Managers/Business Managers/Managing Director, etc. In a college, it could be Lecturer/Professor/Head of the Department/Dean/Principal, etc.

Following is a list of functionalities of the system: A person should be able to

 Login to the system through the first page of the application.


 Change the password after logging into the system.
 See his/her eligibility details (like how many days of leave he/she is eligible for etc).
 Query the leave balance.
 See his/her leave history since the time he/she joined the company/college.
 Apply for leave, specifying the from and to dates, the reason for taking leave, the address for communication while on leave, and his/her superior's email id.
 See his/her current leave applications and the leave applications that are submitted to
him/her for approval or cancellation.
 Approve/reject the leave applications that are submitted to him/her.
 Withdraw his/her leave application (which has not been approved yet).
 Cancel his/her leave (which has been already approved). This will need to be
approved by his/her Superior.
 Get help about the leave system on how to use the different features of the system.
 As soon as a leave application /cancellation request /withdrawal /approval
/rejection /password-change is made by the person, an automatic email should be sent
to the person and his superior giving details about the action.
 The number of days of leave (as per the assumed leave policy) should be
automatically credited to everybody and a notification regarding the same be sent to
them automatically for every academic year.
 There should be an automatic leave-approval facility for leave applications that are older than 2 weeks. Notification about the automatic leave approval should be sent to the person as well as to his superior.

RESOURCE:

Problem Analysis and Project Planning

In the existing Leave Record Management System, every college/department follows a manual procedure in which faculty enter information in a record book. At the end of each month/session, the Administration Department calculates the leave of every member, which is a time-consuming process, and there are chances of losing data or of errors in the records.

This module is a single leave management system that is critical for HR tasks and keeps a record of vital information regarding working hours and leaves. It intelligently adapts to the HR policy of the management and allows employees and their line managers to manage leaves and replacements (if required). In this module, the Head of Department (HOD) has permission to look after the data of every faculty member of the department. The HOD can approve leave through this application and can view the leave information of every individual. This application can be used in a college to reduce processing workload. The project's main idea is to develop an online centralized application, connected to a database, which will maintain faculty leaves, notice information, and their replacements (if needed). The leave management application will reduce paperwork and maintain records in a more efficient and systematic way. This module will also help to calculate the number of leaves taken monthly/annually and help gather data on the number of hours worked, thereby helping the HR Department calculate work hours.

Software Requirement Analysis

In the existing paperwork related to leave management, leaves are maintained using the attendance register for staff. The staff need to submit their leaves manually to their respective authorities. This increases the paperwork, and maintaining the records becomes tedious. Maintaining notices in the records also increases the paperwork. The main objective of the proposed system is to decrease the paperwork and help in easier record maintenance by having a centralized database system where leaves and notices are maintained. The proposed system automates the existing system. It decreases the paperwork and enables easier record maintenance. It also reduces the chances of data loss. This module intelligently adapts to the HR policy of the management and allows employees and their line managers to manage leaves and replacements for better scheduling of workload. The application basically contains the given modules:

List of Modules:

1) STAFF MODULE: It consists of two types of faculty: a) teaching and b) non-teaching.

2) HOD MODULE: It consists of the Head of the Department/manager body, which takes
critical decisions related to HR.

3) ADMINISTRATION MODULE: It calculates leaves and maintains records.
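One way to make the module boundaries concrete is a simple role-to-permission mapping: staff apply for and withdraw leave, the HOD approves or rejects it, and Administration calculates leaves and maintains records. The role and action names below are hypothetical, chosen only to illustrate the idea:

```python
from enum import Enum, auto


class Role(Enum):
    TEACHING_STAFF = auto()
    NON_TEACHING_STAFF = auto()
    HOD = auto()
    ADMINISTRATION = auto()


# Hypothetical mapping from each module's role to the actions it may perform.
PERMISSIONS = {
    Role.TEACHING_STAFF: {"apply_leave", "withdraw_leave", "view_own_history"},
    Role.NON_TEACHING_STAFF: {"apply_leave", "withdraw_leave", "view_own_history"},
    Role.HOD: {"apply_leave", "approve_leave", "reject_leave",
               "view_department_records"},
    Role.ADMINISTRATION: {"calculate_leaves", "maintain_records",
                          "view_all_records"},
}


def can(role: Role, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in PERMISSIONS.get(role, set())
```

In a deployed system these permissions would live in the database alongside user accounts, so that the HOD of one department cannot see another department's records.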

Objective:

 To automate the existing leave management in educational institutes.
 To decrease the paperwork and enable the process with efficient, reliable record
maintenance by using a centralized database, thereby reducing the chances of data loss.
 To provide an automated leave management system that intelligently adapts to the
HR policy of the organization and allows employees and their line managers to
manage leaves and replacements for better scheduling of workload and processes.

Functional Requirements:

 Login to the system through the first page of the application.
 Change the password after logging into the system.
 See his/her eligibility details (e.g., how many days of leave he/she is eligible for).
 Query the leave balance.
 See his/her leave history since the time he/she joined the company/college.
 Apply for leave, specifying the from and to dates, the reason for taking leave, an
address for communication while on leave, and his/her superior's email id.
 See his/her current leave applications and the leave applications that are submitted to
him/her for approval or cancellation.
 Approve/reject the leave applications that are submitted to him/her.
 Withdraw his/her leave application (which has not been approved yet).
 Cancel his/her leave (which has already been approved). This will need to be
approved by his/her superior.
 Get help about the leave system on how to use the different features of the system.
 As soon as a leave application, cancellation request, withdrawal, approval, rejection,
or password change is made by a person, an automatic email should be sent to the
person and his/her superior giving details of the action.
 The number of days of leave (as per the assumed leave policy) should be
automatically credited to everybody at the start of every academic year, and a
notification regarding the same should be sent to them automatically.
 Leave applications that remain pending for more than two weeks should be approved
automatically. A notification about the automatic approval should be sent to the
person as well as his/her superior.
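The automatic-approval requirement above can be sketched as a periodic job that approves every application left pending for more than two weeks. The record layout and function name are assumptions for this sketch; in a real system the caller would also send the required notification emails to the applicant and the superior:

```python
from datetime import date, timedelta

# Assumed policy threshold from the requirement: pending longer than 2 weeks.
AUTO_APPROVE_AFTER = timedelta(weeks=2)


def auto_approve(applications, today=None):
    """Approve every application pending for more than two weeks.

    `applications` is a list of dicts with `submitted_on` (a date) and
    `status` keys; dicts keep the sketch self-contained. Returns the
    records that were auto-approved so the caller can send notifications.
    """
    today = today or date.today()
    approved = []
    for app in applications:
        pending_for = today - app["submitted_on"]
        if app["status"] == "pending" and pending_for > AUTO_APPROVE_AFTER:
            app["status"] = "approved"
            approved.append(app)
    return approved
```

Such a job would typically run once a day (e.g., from a scheduler), query pending applications from the database, and record each automatic approval in the leave history.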
