CCS356-OBJECT ORIENTED SOFTWARE ENGINEERING
UNIT I
Software Characteristics
In both activities, high quality is achieved through good design, but the
manufacturing phase for hardware can introduce quality problems that are nonexistent (or
easily corrected) for software. Software doesn’t “wear out.”
Embedded software resides in read-only memory and is used to control products and
systems for the consumer and industrial markets.
SOFTWARE ENGINEERING
In order to build software that is ready to meet the challenges, we must
recognize a few simple realities:
It follows that a concerted effort should be made to understand the problem before
a software solution is developed.
It follows that design becomes a pivotal activity.
It follows that software should exhibit high quality. It follows that software
should be maintainable.
These simple realities lead to one conclusion: software in all of its forms and
across all of its application domains should be engineered.
Software engineering is the establishment and use of sound engineering principles
in order to obtain economically software that is reliable and works efficiently on
real machines.
Software engineering encompasses a process, methods for managing and
engineering software, and tools.
THE SOFTWARE PROCESS
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
An activity strives to achieve a broad objective and is applied regardless of the application
domain, size of the project, complexity of the effort, or degree of rigor with which software
engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a
tangible outcome.
A process framework establishes the foundation for a complete software engineering process
by identifying a small number of framework activities that are applicable to all software
projects, regardless of their size or complexity.
In addition, the process framework encompasses a set of umbrella activities that are applicable
across the entire software process. A generic process framework for software engineering
encompasses five activities:
Figure 1.7 Incremental Process Model
3. The RAD Model
Rapid Application Development (RAD) is a linear sequential software development process
model that emphasizes an extremely short development cycle.
Rapid application development is achieved by using a component-based construction approach.
If requirements are well understood and project scope is constrained, the RAD
process enables a development team to create a “fully functional system”.
The Prototyping Model:
The prototyping paradigm begins with communication. Developer and customer
meet and define the overall objectives for the software, identify whatever
requirements are known,
4. Spiral Model:
The spiral model is an evolutionary software process model that couples the
iterative nature of prototyping with the controlled and systematic aspects of the
linear sequential model.
The spiral development model is a risk-driven process model generator that is used
to guide multi-stakeholder concurrent engineering of software-intensive systems.
It has two main distinguishing features.
One is a cyclic approach for incrementally growing a system’s degree of definition
and implementation while decreasing its degree of risk.
INTRODUCTION TO AGILITY:
Agility is the ability to respond quickly to changing needs. It encourages team
structures and attitudes that make effective communication among all
stakeholders easier.
It emphasizes rapid delivery of operational software and deemphasizes the
importance of intermediate work products.
It adopts the customer as a part of the development team.
It helps in organizing a team so that it is in control of the work performed,
yielding rapid, incremental delivery of software.
Agility and the Cost of Change:
The cost of change in software development increases nonlinearly as a
project progresses (Figure 1.13, solid black curve).
It is relatively easy to accommodate a change while a software team is gathering
its requirements.
The costs of doing this work are minimal, and the time required will not
affect the outcome of the project.
Later in the project, costs escalate quickly, and the cost and time required to ensure that the
change is made without any side effects are nontrivial.
An agile process reduces the cost of change because software is
released in increments and changes can be better controlled with in an
increment.
An agile process “flattens” the cost of change curve (Figure 1.11, shaded,
solid curve), allowing a software team to accommodate changes late in a
software project without dramatic cost and time impact.
When incremental delivery is coupled with other agile practices such as
continuous unit testing and pair programming, the cost of making a change
is attenuated.
4. AN AGILE PROCESS
An agile process is characterized in a manner that addresses a
number of key assumptions about the majority of software projects:
1. It is difficult to predict which software requirements will persist and which will
change.
2. It is difficult to predict how customer priorities will change.
3. It is difficult to predict how much design is necessary before construction.
4. Analysis, design, construction, and testing are not as predictable as we might like.
AGILITY PRINCIPLES:
1. To satisfy the customer through early and continuous delivery of software.
2. Welcome changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of
months.
4. Customers and developers must work together daily throughout the project.
5. Build projects around motivated individuals.
6. Emphasis on face-to-face communication.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development.
9. Continuous attention to technical excellence and good design enhances
agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. Self-organizing teams produce the best architectures/requirements/design.
12. The team reflects on how to become more effective at regular intervals.
7. The XP Process:
Extreme programming uses an object-oriented approach for software
development. There are four framework activities involved in the XP process.
1. Planning
2. Designing
3. Coding
4. Testing
1. Planning:
Planning begins with the creation of a set of stories (also called user stories). Each story is
written by the customer and is placed on an index card. The customer assigns a
value (i.e., a priority) to the story.
The agile team assesses each story and assigns a cost. Stories are grouped to form a
deliverable increment, as sketched below.
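The grouping of stories into an increment can be illustrated with a small, hypothetical Java sketch. The names used here (Story, planIncrement, velocity) are illustrative assumptions, not part of the XP literature; the sketch simply selects the highest-value stories that fit within the team's capacity for one increment.

// Hypothetical sketch of XP planning: stories carry a customer-assigned value
// and a team-assigned cost, and are grouped into a deliverable increment.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class XpPlanningSketch {

    // One index card: the customer writes the story and assigns a value (priority),
    // the agile team estimates its cost (e.g., in development weeks).
    record Story(String title, int customerValue, int costInWeeks) {}

    // Group the highest-value stories into one increment without exceeding
    // the team's available capacity (velocity) for that increment.
    static List<Story> planIncrement(List<Story> stories, int velocity) {
        List<Story> increment = new ArrayList<>();
        int used = 0;
        List<Story> byValue = new ArrayList<>(stories);
        byValue.sort(Comparator.comparingInt(Story::customerValue).reversed());
        for (Story s : byValue) {
            if (used + s.costInWeeks() <= velocity) {
                increment.add(s);
                used += s.costInWeeks();
            }
        }
        return increment;
    }

    public static void main(String[] args) {
        List<Story> stories = List.of(
                new Story("Search catalogue", 8, 2),
                new Story("Checkout with card", 10, 3),
                new Story("Export sales report", 3, 2));
        // With a velocity of 5 weeks, the two highest-value stories fit.
        System.out.println(planIncrement(stories, 5));
    }
}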
increment and/or the entire software release.
iii) The intent is to improve the IXP process.
Continuous learning:
i) Learning is a vital product of continuous process improvement; members of the
XP team are encouraged to learn new methods and techniques that can lead to a
higher quality product.
ii) In addition to these six new practices, IXP modifies a number of existing XP
practices.
Story-driven development (SDD):
Insists that stories for acceptance tests be written before a single line of code is
developed.
EXTREME PROGRAMMING (XP):
The best known and a very influential agile method, Extreme
Programming (XP) takes an ‘extreme’ approach to iterative development.
New versions may be built several times per day;
Increments are delivered to customers every 2 weeks;
All tests must be run for every build and the build is only accepted if tests run
successfully.
6.1 XP values:
XP comprises five values:
i. Communication
ii. Simplicity
iii. Feedback
iv. Courage
v. Respect.
Each of these values is used as a driver for specific XP activities, actions, and tasks.
In order to achieve effective communication between software engineers and
other stakeholders, XP emphasizes close, yet informal (verbal) collaboration
between customers and developers, the establishment of effective metaphors for
communicating important concepts, continuous feedback, and the avoidance of
voluminous documentation as a communication medium.
To consider simplicity, XP restricts developers to design only for immediate
needs, rather than future needs.
Feedback is derived from three sources: the software, the customer and other
team members.
By designing and implementing an effective testing strategy, the software
provides the agile team with feedback.
The team develops a unit test for each class being developed, to exercise each operation according to its implemented functionality.
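A minimal sketch of this practice is given below, assuming JUnit 5 and a hypothetical Account class (both are illustrative assumptions, not taken from these notes); each test exercises one operation of the class.

// Hypothetical Account class and its JUnit 5 unit test, illustrating the XP
// practice of writing a test that exercises each operation of a class.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class Account {
    private int balance;

    void deposit(int amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    void withdraw(int amount) {
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        balance -= amount;
    }

    int balance() { return balance; }
}

class AccountTest {

    @Test
    void depositIncreasesBalance() {
        Account a = new Account();
        a.deposit(100);
        assertEquals(100, a.balance());
    }

    @Test
    void withdrawDecreasesBalance() {
        Account a = new Account();
        a.deposit(100);
        a.withdraw(40);
        assertEquals(60, a.balance());
    }

    @Test
    void withdrawBeyondBalanceIsRejected() {
        Account a = new Account();
        assertThrows(IllegalStateException.class, () -> a.withdraw(1));
    }
}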
UNIT II
The Requirement Analysis and Specification phase starts after the feasibility
study stage is complete and the project is financially viable and technically
feasible. This phase ends when the requirements specification document has been
developed and reviewed. It is usually called the Software Requirements
Specification (SRS) Document.
These activities are usually carried out by a few experienced members of the
development team and it normally requires them to spend some time at the
customer site. The engineers who gather and analyse customer requirements and
then write the requirements specification document are known as system
analysts in the software industry. System analysts collect data about the product
to be developed and analyse the collected data to conceptualize what exactly
needs to be done. After understanding the precise user requirements, the analysts
analyse the requirements to weed out inconsistencies, anomalies and
incompleteness.
We can conceptually divide the requirements gathering and analysis activity into
two separate tasks:
• Requirements gathering
• Requirements analysis
REQUIREMENTS GATHERING:
The analyst usually studies all the available documents regarding the system to
be developed before visiting the customer site. Customers usually
provide a statement of purpose (SoP) document to the developers. Typically,
these documents might discuss issues such as the context in which the software is
required.
2. Interview:
Typically, there are many different categories of users of the software. Each
category of users typically requires a different set of features from the software.
Therefore, it is important for
the analyst to first identify the different categories
of users and then determine the requirements of each.
3. Task analysis:
The users usually have a black-box view of software and consider the software
as something that provides a set of services. A service supported by the software
is also called a task. We can therefore say that the software performs various
tasks for the users. In this context, the analyst tries to identify and understand the
different tasks to be performed by the software. For each identified task, the
analyst tries to formulate the different steps necessary to realize the required
functionality in consultation with the users.
4. Scenario analysis:
A task can have many scenarios of operation. The different scenarios of a task
may take place when the task is invoked under different situations. For different
types of scenarios of a task, the behaviour of the software can be different.
5. Form analysis:
During requirements analysis, the analyst needs to identify and resolve three
main types of problems in the requirements
2. Completeness: The SRS is complete if, and only if, it includes the following
elements:
(3). Full labels and references to all figures, tables, and diagrams in the SRS and
definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of individual
requirements described in it conflict. There are three types of possible conflict in
the SRS:
(1). The specified characteristics of real-world objects may conflict. For example, the format of
an output report may be described in one requirement as tabular but in another as textual.
Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. In case a term is
used with multiple definitions, the requirements document should clarify the intended
meaning so that the SRS is clear and simple to understand.
Ranking for importance and stability: The SRS is ranked for importance and
stability if each requirement in it has an identifier to indicate either the significance or
stability of that particular requirement.
Typically, all requirements are not equally important. Some prerequisites may
be essential, especially for life-critical applications, while others may be desirable.
Each element should be identified to make these differences clear and explicit.
Another way to rank requirements is to distinguish classes of items as essential,
conditional, and optional.
Verifiability: The SRS is verifiable when the specified requirements can be verified with a
cost-effective process to check whether the final software meets those requirements.
The requirements are verified with the help of reviews.
Traceability: The SRS is traceable if the origin of each of the requirements is clear
and if it facilitates the referencing of each condition in future development or
enhancement documentation.
Design Independence: There should be an option to select from multiple design
alternatives for the final system. More specifically, the SRS should not contain any
implementation details.
The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly, whereas for a feasibility study, less detail
can be used. Hence, the level of abstraction varies according to the objective of the
SRS.
• Anomaly
• Inconsistency
• Incompleteness
Formal Methods:
Formal methods are mathematical techniques used to specify, model, and reason
about software systems. They provide a rigorous and precise approach to system
specification, ensuring clarity and unambiguous representation. Common formal
methods used for system specification include Z notation, B method, and
Specification and Description Language (SDL).
Mathematical Notations:
Language Constructs:
Requirements Capture:
Refinement:
Transitions are triggered by the incidents or input events fed to the state
machine. An FSM is an event-driven reactive system.
• A state machine also produces an output. The output produced depends on the
current state of the state machine sometimes, and sometimes it also depends
on the input events fed to the state machine.
Benefits of using state machines:
• It is used to describe situations or scenarios of your application (Modelling the
lifecycle of a reactive object through interconnections of states).
1. Mealy machine
2. Moore machine
3. Harel state charts
4. UML state machines
Some of these state machines are used for software engineering, and some state
machines are still being used in digital electronics, VLSI design, etc.
Mealy machines are a type of simple state machine where the next state is
determined by the current state and the given input.
The next state is determined by checking which input generates the next state
with the current state. Imagine a button that only works when you’re logged in.
The new state is determined by logging in. Once you are logged in, the next state
is determined by the current state, which is “logged in”.
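A minimal Java sketch of this login-button example, written as a Mealy-style state machine, is given below; the state, event, and method names are illustrative assumptions. The result of each step depends on both the current state and the input event.

// A minimal Mealy-style state machine sketch for the "button that only works
// when you are logged in" example. Names are illustrative.
public class LoginButtonFsm {

    enum State { LOGGED_OUT, LOGGED_IN }

    enum Event { LOGIN, LOGOUT, PRESS_BUTTON }

    private State current = State.LOGGED_OUT;

    // Transition function: next state and output depend on (state, event).
    String handle(Event event) {
        switch (current) {
            case LOGGED_OUT:
                if (event == Event.LOGIN) {
                    current = State.LOGGED_IN;
                    return "logged in";
                }
                return "ignored: please log in first";
            case LOGGED_IN:
                if (event == Event.PRESS_BUTTON) return "button action performed";
                if (event == Event.LOGOUT) {
                    current = State.LOGGED_OUT;
                    return "logged out";
                }
                return "already logged in";
        }
        return "unreachable";
    }

    public static void main(String[] args) {
        LoginButtonFsm fsm = new LoginButtonFsm();
        System.out.println(fsm.handle(Event.PRESS_BUTTON)); // ignored: please log in first
        System.out.println(fsm.handle(Event.LOGIN));        // logged in
        System.out.println(fsm.handle(Event.PRESS_BUTTON)); // button action performed
    }
}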
Moore Machines
A Moore State Machine is a type of state machine for modeling systems with a
high degree of uncertainty. In these systems, predicting the exact sequence of
future events is difficult. As a result, it is difficult to determine the next event.
A Moore model captures this uncertainty by allowing the system to move from
one state to another based on the outcome of a random event.
A Moore model has many applications in both industry and academia. For
example, it can be used to predict when a system will fail or when certain events
will occur with a high probability (e.g., when there will be an earthquake).
It can also be used as part of an optimization algorithm when dealing with
uncertain inputs (e.g., produce only 1% more product than standard).
In addition, Moore models are often used as rules for automatic control systems (e.g.,
medical equipment) that need to respond quickly and accurately without human intervention.
Turing Machine
The Turing Machine consists of an input tape (with symbols on it), an internal
tape (which corresponds to memory), and an output tape (which contains the
result).
A Turing Machine operates through a series of steps: it scans its input tape, reads
out one symbol at a time from its internal tape, and then applies this symbol
as a command (or decision) to its output tape. For example: “If you see ‘X’ on
the input tape, then print ‘Y’ on the output tape.”
The input tape can be considered a finite set of symbols, while the internal and
output tapes are infinite. The Turing Machine must read an entire symbol from its
internal tape before it can move its head to the next symbol on the input tape.
Once it has moved its head to the next symbol, it can read that symbol out of its
internal tape and then move to the next symbol on its input tape.
This process continues until no more input or output symbols are left in the
Turing Machine’s internal or external tapes (at which point, it stops).
State machines are helpful for a variety of purposes. They can be used to model the flow
of logic within a program, represent the states of a system, or for modeling the flow of
events in a business process.
There are many different types of state machines, ranging from simple to highly complex.
A few common use cases include:
State machines are ideal for modeling business workflows. This includes account setup
flows, order completion, or a hiring process.
These things have a beginning and an end and follow some sort of sequential order. State
machines are also great for modeling tasks that involve conditional logic.
Business Decision-Making
Companies pair FSMs with their data strategy to explore the cause and effect of business
scenarios to make informed business decisions.
Business scenarios are often complex and unpredictable. There are many possible
outcomes, and each one impacts the business differently.
A simulation allows you to try different business scenarios and see how each plays out.
You can then assess the risk and determine the best course of action.
4. PETRI NETS
Petri nets are a mathematical modeling tool used in software engineering to analyze
and describe the behavior of concurrent systems. They were introduced by Carl
Adam Petri in the 1960s and have since been widely used in various fields,
including software engineering.
Petri nets provide a graphical representation of a system's state and the transitions
between those states. They consist of places, transitions, and arcs. Places represent
states, transitions represent events or actions, and arcs represent the flow of tokens
(also called markings) between places and transitions.
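A minimal sketch of these ideas in Java is shown below; the class and place names are hypothetical. It keeps a marking (the number of tokens per place) and fires a transition only when every input place holds a token.

// A minimal Petri net sketch: places hold tokens, and a transition may fire
// only when every input place holds at least one token; firing moves tokens
// from input places to output places. Names are illustrative.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PetriNetSketch {

    // Marking: number of tokens currently in each place.
    private final Map<String, Integer> marking = new HashMap<>();

    void put(String place, int tokens) { marking.put(place, tokens); }

    // A transition is enabled if all of its input places contain a token.
    boolean enabled(List<String> inputs) {
        return inputs.stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
    }

    // Firing consumes one token from each input place and produces one in each output place.
    boolean fire(List<String> inputs, List<String> outputs) {
        if (!enabled(inputs)) return false;
        inputs.forEach(p -> marking.merge(p, -1, Integer::sum));
        outputs.forEach(p -> marking.merge(p, 1, Integer::sum));
        return true;
    }

    public static void main(String[] args) {
        PetriNetSketch net = new PetriNetSketch();
        net.put("orderReceived", 1);
        net.put("stockAvailable", 1);
        // Transition "shipOrder" needs both input places marked.
        boolean fired = net.fire(List.of("orderReceived", "stockAvailable"), List.of("orderShipped"));
        System.out.println("fired = " + fired + ", marking = " + net.marking);
    }
}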
In software engineering, Petri nets can be used for various purposes, including:
System Modeling: Petri nets are used to model the behavior of complex software
systems, especially concurrent and distributed systems. They help in understanding
and visualizing how different components of the system interact and how their states
change over time.
Specification and Verification: Petri nets can be used to specify the desired
behavior of a software system. By modeling the system using Petri nets, it becomes
possible to formally verify properties such as safety, liveness, reachability, and
deadlock-freeness. Verification techniques based on Petri nets help in identifying
design flaws, potential deadlocks, or other issues early in the development process.
Workflow Modeling: Petri nets are often used to model business processes and
workflows. In software engineering, this is particularly useful for modeling the flow
of tasks and activities in software development processes, such as agile
methodologies or continuous integration/continuous deployment (CI/CD) pipelines.
Software Testing: Petri nets can be utilized in software testing to generate test cases
and verify the correctness of the system. By modeling the system's behavior and
different scenarios, it becomes possible to systematically generate test cases that
cover various paths and states, helping in identifying potential bugs and ensuring the
software's reliability.
Characteristics of UML
• It is a modeling language that has been generalized for various use cases.
• It is not a programming language; instead, it is a graphical modeling language
that uses diagrams that can be understood by non-programmers as well.
• It has a close connection to object-oriented analysis and design.
• It is used to visualize the system's workflow.
Conceptual Modeling
Before moving ahead with the concept of UML, we should first understand the
basics of the conceptual model.
o Object: An object is a real world entity. There are many objects present
within a single system. It is a fundamental building block of UML.
o Class: A class is a software blueprint for objects, which means that it defines the
variables and methods common to all the objects of a particular type.
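A minimal Java sketch of this idea follows, using a hypothetical Student class as the blueprint; the attribute and method names are illustrative assumptions.

// The class (blueprint) defines the variables and methods shared by all Student
// objects; each object is one instance of that blueprint.
public class Student {
    // Variables (attributes) common to all Student objects.
    private final String studentId;
    private String name;

    public Student(String studentId, String name) {
        this.studentId = studentId;
        this.name = name;
    }

    // Methods (operations) common to all Student objects.
    public String getStudentId() { return studentId; }
    public String getName() { return name; }
    public void rename(String newName) { this.name = newName; }

    public static void main(String[] args) {
        // Two distinct objects built from the same class (blueprint).
        Student s1 = new Student("S001", "Asha");
        Student s2 = new Student("S002", "Ravi");
        System.out.println(s1.getStudentId() + " " + s1.getName());
        System.out.println(s2.getStudentId() + " " + s2.getName());
    }
}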
The main purpose of a use case diagram is to portray the dynamic aspect of a
system. It accumulates the system's requirement, which includes both internal as
well as external influences. It invokes persons, use cases, and several things that
invoke the actors and elements accountable for the implementation of use case
diagrams. It represents how an entity from the external environment can interact
with a part of the system.
3. It recognizes the internal as well as external factors that influence the system.
A use case diagram depicting the Online Shopping website is given below.
Here the Web Customer actor makes use of any online shopping website to
purchase online. The top-level use cases are as follows: View Items, Make
Purchase, Checkout, and Client Register. The View Items use case is utilized by
the customer who searches and views products. The Client Register use case
allows the customer to register with the website for availing gift vouchers,
coupons, or getting a private sale invitation. It is to be noted that Checkout is
an included use case, which is part of Make Purchase, and it is not available
by itself.
o The View Items is further extended by several use cases such as; Search
Similarly, the Checkout use case also includes the following use cases, as shown below. It
requires an authenticated Web Customer, which can be done by login page, user
authentication
o The Checkout use case involves the Payment use case, which can be done either
by credit card and external credit payment services or with PayPal.
Following are some important tips that are to be kept in mind while drawing a
use case diagram:
2. A use case diagram should represent the most significant interaction among
4. If the use case diagram is large and more complex, then it should be
1. Class diagram
The class diagram depicts a static view of an application. It represents the types
of objects residing in the system and the relationships between them. A class
consists of its objects, and also it may inherit from other classes. A class diagram
is used to visualize, describe, and document various aspects of the system,
and also construct executable software code.
to be programmed.
5. It is helpful for the stakeholders and the developers.
Vital components of a Class Diagram
1. The attributes are written along with its visibility factors, which are public
Relationships
Dependency: A dependency is a relationship between two or more classes where a change in
one class causes changes in another class. It forms a weaker relationship.
In the following example, Student_Name is dependent on the Student_Id.
2. INTERACTION DIAGRAM
As the name suggests, the interaction diagram portrays the interactions between
distinct entities present in the model. It amalgamates both the activity and
sequence diagrams. The communication is nothing but units of the behaviour of a classifier.
A set of messages that are interchanged between the entities to achieve certain
specified tasks in the system is termed as interaction. It may incorporate any
feature of the classifier of which it has access. In the interaction diagram, the
critical component is the messages and the lifeline.
In UML, the interaction overview diagram initiates the interaction between the
objects utilizing message passing. While drawing an interaction diagram, the
entire focus is to represent the relationship among different objects which are
available within the system boundary and the message exchanged by them to
communicate with each other.
The sequence diagram envisions the order of the flow of messages inside the
system by depicting the communication between two lifelines, just like a time-
ordered sequence of events.
Before drawing an interaction diagram, the first step is to discover the scenario
for which the diagram will be made. Next, we will identify various lifelines that
will be invoked in the communication, and then we will classify each lifeline.
After that, the connections are investigated and how the lifelines are interrelated
to each other.
4. Several distinct messages that depict the interactions in a precise and clear way.
3. The interaction diagram represents the interactive (dynamic) behaviour of the
system.
4. The sequence diagram portrays the order of control flow from one element
to the other elements inside the system, whereas the collaboration diagrams
are employed to get an overview of the object architecture of the system.
5. The interaction diagram models the system as a time-ordered sequence of
interactions.
6. The interaction diagram systemizes the structure of the interactive elements.
3. ACTIVITY DIAGRAM
In UML, the activity diagram is used to demonstrate the flow of control within
the system rather than the implementation. It models the concurrent and
sequential activities.
The activity diagram helps in envisioning the workflow from one activity to
another. It puts emphasis on the condition of flow and the order in which it occurs.
Activities
The control flow of activity is represented by control nodes and object nodes that
illustrate the objects used within an activity. The activities are initiated at the
initial node and are terminated at the final node.
Here the input parameter is the Requested order, and once the order is accepted,
all of the required information is then filled, payment is also accepted, and then
the order is shipped. It permits order shipment before an invoice is sent or
payment is completed.
An activity diagram can be used to portray business processes and workflows. Also, it is
used for modeling business as well as software. An activity diagram is utilized for
the following:
1. To graphically model the workflow in an easier and understandable way.
2. To model the execution flow among several activities.
A state diagram is used to represent the condition of the system or part of the
system at finite instances of time. It’s a behavioral diagram and it represents
the behavior using finite state transitions. State diagrams are also referred to as
State machines and State-chart Diagrams. These terms are often used
interchangeably. So simply, a state diagram is used to model the dynamic
behavior of a class in response to time and changing external stimuli. We can
say that each and every class has a state but we don’t model every class using
State diagrams. We prefer to use state diagrams only for classes having three or more states.
Uses of state chart diagram –
• We use it to state the events responsible for change in state (we do not
show what processes cause those events).
• We use it to model the dynamic behavior of the system .
• To understand the reaction of objects/classes to internal or external stimuli.
Firstly, let us understand what Behavior diagrams are.
There are two types of diagrams in UML :
1. Structure Diagrams – Used to model the static structure of a system,
for example- class diagram, package diagram, object diagram, deployment
diagram etc.
2. Behavior diagram – Used to model the dynamic change in the
system over time. They are used to model and construct the functionality
of a system. So, a behavior diagram simply guides us through the
functionality of the system using Use case diagrams, Interaction
diagrams, Activity diagrams and State diagrams.
1. Initial state – We use a black filled circle to represent the initial state of a
system or a class.
Figure – transition
6. Self transition – We use a solid arrow pointing back to the state itself
to represent a self transition. There might be scenarios when the state of the
object does not change upon the occurrence of an event. We use self
transitions to represent such cases.
The UML diagrams we draw depend on the system we aim to represent. Here is
just an example of what an online ordering system might look like:
Data Flows
Data flow represents the flow of data between two processes. It could be
between an actor and a process, or between a data store and a process. A data
flow denotes the value of a data item at some point of the computation. This value
is not changed by the data flow.
Representation in DFD − A data flow is represented by a directed arc or an
arrow, labelled with the name of the data item that it carries.
In the above figure, Integer_a and Integer_b represent the input data flows to the
process, while
L.C.M. and H.C.F. are the output data flows.
A data flow may be forked in the following cases −
• The output value is sent to several places as shown in the following figure.
Here, the output arrows are unlabelled as they denote the same value.
• The data flow contains an aggregate value, and each of the components is
sent to different places as shown in the following figure. Here, each of the
forked components is labelled.
Actors
Actors are the active objects that interact with the system by either producing
data and inputting them to the system, or consuming data produced by the
system. In other words, actors serve as the sources and the sinks of data.
Representation in DFD − An actor is represented by a rectangle. Actors are
connected to the inputs and outputs and lie on the boundary of the DFD.
Example − The following figure shows the actors, namely, Customer and
Sales_Clerk in a counter sales system.
Data Stores
Data stores are the passive objects that act as a repository of data. Unlike actors, they
cannot perform any operations. They are used to store data and retrieve the stored data.
They represent a data structure, a disk file, or a table in a database.
Representation in DFD − A data store is represented by two parallel lines containing
the name of the data store. Each data store is connected to at least one process. Input
arrows contain information to modify the contents of the data store, while output arrows
contain information retrieved from
the data store. When a part of the information is to be
retrieved, the output arrow is labelled. An unlabelled arrow denotes full data retrieval. A
two-way arrow implies both retrieval and update.
Example − The following figure shows a data store, Sales_Record, that stores the details
of all sales. Input to the data store comprises details of sales such as item, billing
amount, date, etc. To find the average sales, the process retrieves the sales records and
computes the average.
Constraints
Constraints specify the conditions or restrictions that need to be satisfied over time. They
allow adding new rules or modifying existing ones. Constraints can appear in all the three models of object-oriented analysis.
Advantages of DFDs:
• DFDs depict the boundaries of a system and hence are helpful in portraying the relationship between the external objects and the processes within the system.
• They help the users to have a knowledge about the system.
• The graphical representation serves as a blueprint for the programmers to develop a system.
• DFDs provide detailed information about the system processes.
• They are used as a part of the system documentation.
Disadvantages of DFDs:
• DFDs take a long time to create, which may not be feasible for practical purposes.
• DFDs do not provide any information about the time-dependent behavior, i.e., they do not specify when the transformations are done.
• They do not throw any light on the frequency of computations or the reasons for computations.
• The preparation of DFDs is a complex process that needs considerable expertise. Also, it is difficult for a non-technical person to understand.
• The method of preparation is subjective and leaves ample scope to be imprecise.
The Object Model, the Dynamic Model, and the Functional Model are complementary to
each other for a complete Object-Oriented Analysis.
• Object modelling develops the static structure of the software system in terms of
objects. Thus it shows the “doers” of a system.
• Dynamic Modelling develops the temporal behavior of the objects in response to
external events. It shows the sequences of operations performed on the objects.
• Functional model gives an overview of what the system should do.
Functional Model and Object Model
The four main parts of a Functional Model in terms of object model are −
• Process − Processes imply the methods of the objects that need to be implemented.
• Actors − Actors are the objects in the object model.
• Data Stores − These are either objects in the object model or attributes of objects.
• Data Flows − Data flows to or from actors represent operations on or by objects.
Data flows to or from data stores represent queries or updates.
Functional Model and Dynamic Model
The dynamic model states when
the operations are performed, while the functional
model states how they are performed and which arguments are needed. As actors are
active objects, the dynamic model has to specify when it acts. The data stores are passive
objects and they only respond to updates and queries; therefore, the dynamic model
need not specify when they act.
Object Model and Dynamic Model
The dynamic model shows the status of the objects and the operations performed on the
occurrences of events and the subsequent changes in states. The state of the object as a
result of the changes is shown in the object model.
UNIT III
SOFTWARE DESIGN
Software design – Design process – Design concepts – Coupling – Cohesion –
Functional independence – Design patterns – Model-view-controller – Publish-
subscribe – Adapter – Command – Strategy – Observer – Proxy – Facade –
Architectural styles – Layered - Client Server - Tiered - Pipe and filter- User
interface design-Case Study.
1. Software design:
The design phase of software development deals with transforming the customer requirements as
described in the SRS documents into a form implementable using a programming language. The
software design process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and views of a
system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination of
the modules makes up the system.
3. Components: This provides a particular function or group of related functions. They are made
up of modules.
4. Interfaces: This is the shared boundary across which the components of a system exchange
information and relate.
5. Data: This is the management of the information and data flow.
Interface Design
Interface design is the specification of the interaction between a system and its environment. This
phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e.,
during interface design, the internals of the system are completely ignored, and the system is
treated as a black box. Attention is focused on the dialogue between the target system and the
users, devices, and other systems with which it interacts. The design problem statement
produced during the problem analysis step should identify the people, other systems, and devices
which are collectively called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which the
system must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages,
and outgoing events or outputs.
Architectural Design
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between them. In
architectural design, the overall structure of the system is chosen, but the internal details of major
components are ignored. Issues in architectural design include:
2. Software Design Principles
Software design principles are concerned with providing means to handle the complexity of the design
process effectively. Effectively managing the complexity will not only reduce the effort needed for design
but can also reduce the scope of introducing errors during design.
Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem, we
divide and conquer: the problem is divided into smaller pieces so that each piece
can be handled separately.
For software design, the goal is to divide the problem into manageable pieces.
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without
bothering about the internal details of the implementation. Abstraction can be used for existing elements as
well as the component being designed.
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the method it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for Object
Oriented design approaches.
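A small Java sketch contrasting the two kinds of abstraction is given below; the Stack and sum names are illustrative assumptions. Callers depend only on what the operations do, not on the hidden data representation or algorithm.

// Functional abstraction vs. data abstraction, in one self-contained sketch.
import java.util.ArrayList;
import java.util.List;

public class AbstractionSketch {

    // Data abstraction: the internal representation (an ArrayList here) is hidden;
    // users of Stack see only push/pop/isEmpty.
    static class Stack {
        private final List<Integer> items = new ArrayList<>();

        void push(int value) { items.add(value); }

        int pop() {
            if (items.isEmpty()) throw new IllegalStateException("empty stack");
            return items.remove(items.size() - 1);
        }

        boolean isEmpty() { return items.isEmpty(); }
    }

    // Functional abstraction: the module is specified by what it computes
    // (the sum of its inputs); the algorithm used is invisible to the caller.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(10);
        s.push(20);
        System.out.println(s.pop());                 // 20
        System.out.println(sum(new int[]{1, 2, 3})); // 6
    }
}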
Modularity
Modularity refers to the division of software into separate modules which are differently named and
addressed and are integrated later to obtain the completely functional software. It is the only property
that allows a program to be intellectually manageable. Single large programs are difficult to understand and
read due to a large number of reference variables, control paths, global variables, etc.
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
Advantages of Modularity
Disadvantages of Modularity
3. Design concepts
1. Abstraction
A solution is stated in broad terms using the language of the problem environment at the highest
level of abstraction.
The lower level of abstraction provides a more detailed description of the solution.
A sequence of instructions that has a specific and limited function is referred to as a procedural
abstraction.
A collection of data that describes a data object is a data abstraction.
2. Architecture
The complete structure of the software is known as software architecture.
Structure provides conceptual integrity for a system in a number of ways.
The architecture is the structure of program modules where they interact with each other in a
specialized way.
The components use the structure of data.
The aim of the software design is to obtain an architectural framework of a system.
The more detailed design activities are conducted from the framework.
3. Patterns
A design pattern describes a design structure and that structure solves a particular design
problem in a specified context.
4. Modularity
Software is divided into separately named and addressable components, sometimes called
modules, which are integrated to satisfy the problem requirements.
Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules must be specified and designed so that the information, such as algorithms and data, contained
in a module is not accessible to other modules that do not require that information.
6. Functional independence
The functional independence is the concept of separation and related to the concept of
modularity, abstraction and information hiding.
The functional independence is assessed using two criteria, i.e., cohesion and coupling.
Cohesion
Cohesion is an extension of the information hiding concept.
A cohesive module performs a single task and it requires a small interaction with the other
components in other parts of the program.
Coupling
Coupling is an indication of interconnection between modules in a structure of software.
7. Refinement
Refinement is a top-down design approach.
It is a process of elaboration.
A program is developed by successively refining levels of procedural detail.
A hierarchy is established by decomposing a statement of function in a stepwise manner till the
programming language statements are reached.
8. Refactoring
It is a reorganization technique which simplifies the design of components without changing its
function behaviour.
Refactoring is the process of changing the software system in such a way that it does not change the
external behaviour of the code yet improves its internal structure.
9. Design classes
The model of software is defined as a set of design classes.
Every class describes the elements of the problem domain, focusing on features of the problem
that are user visible.
OO design concept in Software Engineering
Software design model elements
A good design is the one that has low coupling. Coupling is measured by the number of relations between
the modules. That is, the coupling increases as the number of calls between modules increase or the amount
of shared data is large. Thus, it can be said that a design with high coupling will have more errors.
1. No Direct Coupling: In this case, modules are subordinates of different modules. Therefore, there is no direct coupling between them.
2. Data Coupling: When data of one module is passed to another module, this is called data coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data items such
as structure, objects, etc. When the module passes non-global data structure or entire structure to another
module, they are said to be stamp coupled. For example, passing structure variable in C or object in C++
language to a module.
4. Control Coupling: Control Coupling exists among two modules if data from one module is used to direct
the structure of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed data format,
communication protocols, or device interface. This is related to communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through some global
data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a branch from
one module into another module.
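The Java snippets below sketch a few of these coupling types; the classes, fields, and values are invented purely for illustration and are not taken from these notes.

// Illustrative snippets showing data, stamp, control, and common coupling.
public class CouplingSketch {

    // Common coupling: both modules read/write the same global data item.
    static int globalDiscount = 10;

    // Data coupling: only elementary data (the two prices) is passed between modules.
    static int addPrices(int priceA, int priceB) {
        return priceA + priceB;
    }

    // Stamp coupling: a composite data item (the whole Order object) is passed,
    // even though only part of it may be needed.
    record Order(String id, int amount, String shippingAddress) {}

    static int applyGlobalDiscount(Order order) {
        return order.amount() - globalDiscount;   // also common-coupled via globalDiscount
    }

    // Control coupling: the flag passed in directs which branch the callee executes.
    static String formatReport(int total, boolean asTable) {
        return asTable ? "| total | " + total + " |" : "Total = " + total;
    }

    public static void main(String[] args) {
        int total = addPrices(40, 60);
        Order order = new Order("O-1", total, "Chennai");
        System.out.println(applyGlobalDiscount(order));
        System.out.println(formatReport(total, true));
    }
}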
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong
together. Thus, cohesion measures the strength of relationships between pieces of functionality within a
given module. For example, in highly cohesive systems, functionality is strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low cohesion."
1. Functional Cohesion: Functional Cohesion is said to exist if the different elements of a module,
cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the element of a module
form the components of the sequence, where the output from one component of the sequence is
input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion, if all tasks of the
module refer to or update the same data structure, e.g., the set of functions defined on an array or a
stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if the functions of the module
are all parts of a procedure in which a particular sequence of steps has to be carried out for achieving a
goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact that all the
methods must be executed in the same time span, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module perform
a similar operation. For example Error handling, data input and data output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of tasks
that are associated with each other very loosely, if at all.
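The Java sketch below contrasts a functionally cohesive module with a coincidentally cohesive one; the method names and bundled tasks are invented for illustration.

// High cohesion (one function) vs. coincidental cohesion (unrelated tasks).
public class CohesionSketch {

    // Functional cohesion: every statement cooperates to achieve one function,
    // computing the square root by Newton's method.
    static double squareRoot(double x) {
        double guess = x / 2.0;
        for (int i = 0; i < 20; i++) {
            guess = (guess + x / guess) / 2.0;
        }
        return guess;
    }

    // Coincidental cohesion: unrelated tasks bundled into one module only
    // because they happened to be grouped together.
    static void miscellaneousUtilities() {
        System.out.println("Printing a banner");        // output task
        int[] data = {3, 1, 2};
        java.util.Arrays.sort(data);                    // sorting task
        System.out.println(java.time.LocalDate.now());  // date task
    }

    public static void main(String[] args) {
        System.out.println(squareRoot(2.0));
        miscellaneousUtilities();
    }
}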
Coupling vs. Cohesion:
• Coupling shows the relationships between modules; cohesion shows the relationship within the module.
• In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.
Suppose you and your friends are asked to work on a calculator project as a team.
Here we need to develop each calculator functionality in form of modules taking two
user inputs.
Before you enter the development phase, you and your team need to
design the project in such a way that each module you develop individually
is able to perform its assigned task with little or no interaction
with your friends' modules.
In other words, if you are working on the addition module, then your module
should be able to independently perform the addition operation on receiving user input. It
should not require any interaction with other modules like the subtraction module,
multiplication module, etc.
Module reusability
A functionally independent module performs some well-defined and specific task. So it
becomes easy to reuse such modules in different programs requiring the same
functionality.
Understandability
A functionally independent module is less complex, so it is easy to understand. Since such
modules have less interaction with other modules, they can be understood in isolation.
Design Patterns
These design patterns are all about class instantiation. This pattern can be
further divided into class-creation patterns and object-creational patterns. While
class-creation patterns use inheritance effectively in the instantiation process,
object-creational patterns use delegation effectively to get the job done.
Abstract Factory
Creates an instance of several families of classes
Builder
Separates object construction from its representation
Factory Method
Creates an instance of several derived classes
Object Pool
Avoid expensive acquisition and release of resources by recycling objects
that are no longer in use
Prototype
A fully initialized instance to be copied or cloned
Singleton
A class of which only a single instance can exist (a minimal sketch follows this list)
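As referenced in the Singleton entry above, a minimal Java sketch of the Singleton pattern follows; the ConfigurationManager name is a hypothetical example.

// Singleton: a class of which only a single instance can exist, with a global access point.
public final class ConfigurationManager {

    // The single, eagerly created instance.
    private static final ConfigurationManager INSTANCE = new ConfigurationManager();

    private String applicationName = "default-app";

    // Private constructor prevents other classes from instantiating it.
    private ConfigurationManager() {}

    // Global point of access to the one instance.
    public static ConfigurationManager getInstance() {
        return INSTANCE;
    }

    public String getApplicationName() { return applicationName; }
    public void setApplicationName(String name) { this.applicationName = name; }

    public static void main(String[] args) {
        ConfigurationManager a = ConfigurationManager.getInstance();
        ConfigurationManager b = ConfigurationManager.getInstance();
        System.out.println(a == b);   // true: both references point to the same instance
    }
}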
These design patterns are all about Class and Object composition. Structural
class-creation patterns use inheritance to compose interfaces. Structural object-
patterns define ways to compose objects to obtain new functionality.
Adapter
Match interfaces of different classes (see the sketch after this list)
Bridge
Separates an object’s interface from its implementation
Composite
A tree structure of simple and composite objects
Decorator
Add responsibilities to objects dynamically
Facade
A single class that represents an entire subsystem
Flyweight
A fine-grained instance used for efficient sharing
Private Class Data
Restricts accessor/mutator access
Proxy
An object representing another object
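As referenced in the Adapter entry above, a minimal Java sketch of the Adapter pattern follows; the Printer and LegacyPrinter names are hypothetical examples.

// Adapter: an existing class with an incompatible interface (LegacyPrinter)
// is matched to the interface the client expects (Printer).
public class AdapterSketch {

    // The interface expected by client code.
    interface Printer {
        void print(String text);
    }

    // An existing (legacy or third-party) class we cannot modify.
    static class LegacyPrinter {
        void printDocument(String header, String body) {
            System.out.println(header + ": " + body);
        }
    }

    // The adapter implements the expected interface and delegates to the legacy class.
    static class LegacyPrinterAdapter implements Printer {
        private final LegacyPrinter legacy = new LegacyPrinter();

        @Override
        public void print(String text) {
            legacy.printDocument("DOC", text);
        }
    }

    public static void main(String[] args) {
        Printer printer = new LegacyPrinterAdapter();
        printer.print("Hello from the adapter");
    }
}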
Behavioral design patterns
These design patterns are all about how objects communicate. Behavioral
patterns are those patterns that are most specifically concerned with
communication between objects.
Chain of Responsibility
A way of passing a request between a chain of objects
Command
Encapsulate a command request as an object
Interpreter
A way to include language elements in a program
Iterator
Sequentially access the elements of a collection
Mediator
Defines simplified communication between classes
Memento
Capture and restore an object's internal state
Null Object
Designed to act as a default value of an object
Observer
A way of notifying change to a number of classes (a minimal sketch follows this list)
State
Alter an object's behavior when its state changes
Strategy
Encapsulates an algorithm inside a class
Template Method
Defer the exact steps of an algorithm to a subclass
Visitor
Defines a new operation to a class without change
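As referenced in the Observer entry above, a minimal Java sketch of the Observer pattern follows; the TemperatureSensor example and its names are hypothetical.

// Observer: a subject notifies a number of registered observers whenever its state changes.
import java.util.ArrayList;
import java.util.List;

public class ObserverSketch {

    interface Observer {
        void update(int newTemperature);
    }

    // The subject keeps a list of observers and notifies each one on change.
    static class TemperatureSensor {
        private final List<Observer> observers = new ArrayList<>();
        private int temperature;

        void addObserver(Observer o) { observers.add(o); }

        void setTemperature(int value) {
            this.temperature = value;
            observers.forEach(o -> o.update(temperature));
        }
    }

    public static void main(String[] args) {
        TemperatureSensor sensor = new TemperatureSensor();
        sensor.addObserver(t -> System.out.println("Display shows " + t + " C"));
        sensor.addObserver(t -> { if (t > 40) System.out.println("Alarm: too hot!"); });
        sensor.setTemperature(25);
        sensor.setTemperature(45);
    }
}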
Types of Design Patterns
There are three types of Design Patterns,
Creational Design Pattern
Structural Design Pattern
Behavioral Design Pattern
Creational Design Pattern
Creational Design Pattern abstract the instantiation process. They help in making a system
independent of how its objects are created, composed and represented.
Importance of Creational Design Patterns:
A class creational Pattern uses inheritance to vary the class that’s instantiated, whereas an
object creational pattern will delegate instantiation to another object.
Creational patterns become important as systems evolve to depend more on object
composition than class inheritance. As that happens, emphasis shifts away from hardcoding a
fixed set of behaviors toward defining a smaller set of fundamental behaviors that can be
composed into any number of more complex ones.
Creating objects with particular behaviors requires more than simply instantiating a class.
When to use Creational Design Patterns
Complex Object Creation: Use creational patterns when the process of creating an object is
complex, involving multiple steps, or requires the configuration of various parameters.
Promoting Reusability: Creational patterns promote object creation in a way that can be
reused across different parts of the code or even in different projects, enhancing modularity
and maintainability.
Reducing Coupling: Creational patterns can help reduce the coupling between client code
and the classes being instantiated, making the system more flexible and adaptable to
changes.
Singleton Requirements: Use the Singleton pattern when exactly one instance of a class is
needed, providing a global point of access to that instance.
Step-by-Step Construction: Builder pattern of creational design patterns is suitable when
you need to construct a complex object step by step, allowing for the creation of different
representations of the same object.
Advantages of Creational Design Patterns
Flexibility and Adaptability: Creational patterns make it easier to introduce new types of
objects or change the way objects are created without modifying existing client code. This
enhances the system’s flexibility and adaptability to change.
Reusability: By providing a standardized way to create objects, creational patterns promote
code reuse across different parts of the application or even in different projects. This leads to
more maintainable and scalable software.
Centralized Control: Creational patterns, such as Singleton and Factory patterns, allow for
centralized control over the instantiation process. This can be advantageous in managing
resources, enforcing constraints, or ensuring a single point of access.
Scalability: With creational patterns, it’s easier to scale and extend a system by adding new
types of objects or introducing variations without causing major disruptions to the existing
codebase.
Promotion of Good Design Practices: Creational patterns often encourage adherence to
good design principles such as abstraction, encapsulation, and the separation of concerns.
This leads to cleaner, more maintainable code.
Disadvantages of Creational Design Patterns
Increased Complexity: Introducing creational patterns can sometimes lead to increased
complexity in the codebase, especially when dealing with a large number of classes,
interfaces, and relationships.
Overhead: Using certain creational patterns, such as the Abstract Factory or Prototype
pattern, may introduce overhead due to the creation of a large number of classes and
interfaces.
Dependency on Patterns: Over-reliance on creational patterns can make the codebase
dependent on a specific pattern, making it challenging to adapt to changes or switch to
alternative solutions.
Readability and Understanding: The use of certain creational patterns might make the code
less readable and harder to understand, especially for developers who are not familiar with the
specific pattern being employed.
Structural Design Patterns
Structural patterns are concerned with how classes and objects are composed to form larger
structures. Structural class patterns use inheritance to compose interfaces or implementations.
Importance of Structural Design Patterns
This pattern is particularly useful for making independently developed class libraries work
together.
Structural object patterns describe ways to compose objects to realize new functionality.
The added flexibility of object composition comes from the ability to change the composition at
run-time, which is impossible with static class composition.
When to use Structural Design Patterns
Adapting to Interfaces: Use structural patterns like the Adapter pattern when you need to
make existing classes work with others without modifying their source code. This is particularly
useful when integrating with third-party libraries or legacy code.
Organizing Object Relationships: Structural patterns such as the Decorator pattern are
useful when you need to add new functionalities to objects by composing them in a flexible
and reusable way, avoiding the need for subclassing.
Simplifying Complex Systems: When dealing with complex systems, structural patterns like
the Facade pattern can be used to provide a simplified and unified interface to a set of
interfaces in a subsystem.
Managing Object Lifecycle: The Proxy pattern is helpful when you need to control access to
an object, either for security purposes, to delay object creation, or to manage the object’s
lifecycle.
Hierarchical Class Structures: The Composite pattern is suitable when dealing with
hierarchical class structures where clients need to treat individual objects and compositions of
objects uniformly.
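To make the Adapter idea above concrete, here is a minimal, hedged sketch in Java; LegacyRectangle, Shape and RectangleAdapter are assumed example names, not part of any real library:

// Target interface expected by the client code (assumed example).
interface Shape {
    void draw(int x, int y, int width, int height);
}

// Existing (legacy or third-party) class with an incompatible interface.
class LegacyRectangle {
    void drawRect(int x1, int y1, int x2, int y2) {
        System.out.println("Rectangle from (" + x1 + "," + y1 + ") to (" + x2 + "," + y2 + ")");
    }
}

// Adapter translates Shape calls into LegacyRectangle calls without modifying either class.
class RectangleAdapter implements Shape {
    private final LegacyRectangle adaptee = new LegacyRectangle();

    @Override
    public void draw(int x, int y, int width, int height) {
        adaptee.drawRect(x, y, x + width, y + height);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        Shape shape = new RectangleAdapter();
        shape.draw(10, 20, 30, 40); // the client works only against the Shape interface
    }
}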
Advantages of Structural Design Patterns
Flexibility and Adaptability: Structural patterns enhance flexibility by allowing objects to be
composed in various ways. This makes it easier to adapt to changing requirements without
modifying existing code.
Code Reusability: These patterns promote code reuse by providing a standardized way to
compose objects. Components can be reused in different contexts, reducing redundancy and
improving maintainability.
Improved Scalability: As systems grow in complexity, structural patterns provide a scalable
way to organize and manage the relationships between classes and objects. This supports the
growth of the system without causing a significant increase in complexity.
Simplified Integration: Structural patterns, such as the Adapter pattern, facilitate the
integration of existing components or third-party libraries by providing a standardized interface.
This makes it easier to incorporate new functionalities into an existing system.
Model-view-controller (MVC)
Definition
Model
This component in the architecture will represent all data-related logic. This includes
defining how the data is formed. In other words, this holds the definition for many of
the types that we use in the application. In many cases, the model here refers to the
type of data that we are dealing with in the application. This component also notifies
its dependents about data changes.
View
Contains all the user interface (UI) logic in the application. This component of the
application encapsulates mainly the UI-related logic, which includes things that the end
user will manipulate, such as dropdowns, buttons, web pages, etc.
Controller
Controllers exist as a layer between the Model and View components to process all the
business logic arising from user input. The controller is responsible for handling inputs from the View
components, manipulating data using the models from the Model component, and then
finally interacting with the View components again to render the final output to the end
user. It is responsible for manipulating the data.
Why MVC
The MVC pattern today is widely used for many applications and remains a popular
choice. This is due to a few key reasons:
Given the separation of the application into three distinct areas, more developers are able
to work on each part separately. For example, if a developer works on the Model, he is not
directly blocking another developer from building up the View component of the application;
this allows teams to speed up development purely due to the nature of the architecture itself.
Greater Testability
Each component being separated from each other means that developers are able to
test each one separately and in isolation. This is made easier due to the clear
separation of concerns that is applied in this architecture pattern, e.g., a Model can be
tested easily without the View component.
Any change in the View component will usually not affect the Model component; hence
developers using this pattern can easily extend and add new views to the application to
display the data from the Model in different ways. Modifications are thus easier to isolate
to a single component instead of affecting the entire application.
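As a hedged, minimal sketch of the Model-View-Controller split in plain Java (the Counter* class names are assumed examples, not a framework API):

import java.util.ArrayList;
import java.util.List;

// Model: holds data and notifies its dependents when the data changes.
class CounterModel {
    private int value;
    private final List<Runnable> listeners = new ArrayList<>();

    void addListener(Runnable listener) { listeners.add(listener); }
    int getValue() { return value; }
    void setValue(int v) {
        value = v;
        listeners.forEach(Runnable::run); // notify views of the change
    }
}

// View: contains only UI logic; here it simply renders the model's state as text.
class CounterView {
    void render(int value) { System.out.println("Counter = " + value); }
}

// Controller: handles user input, updates the model, and wires model to view.
class CounterController {
    private final CounterModel model;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        model.addListener(() -> view.render(model.getValue()));
    }

    void onIncrementClicked() { model.setValue(model.getValue() + 1); } // simulated user input
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterController controller = new CounterController(new CounterModel(), new CounterView());
        controller.onIncrementClicked();
        controller.onIncrementClicked();
    }
}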
Web applications
ASP.NET MVC, for example, provides developers with an MVC abstraction built on top of ASP.NET
and thus provides a large set of added functionality. (Sample code is available in the ASP.NET documentation.)
Another example framework for the web that uses MVC is Sails. Sails is a Node.js
framework that provides added functionality. A Sails app comes with the MVC structure
preconfigured, so developers can use it right out of the box.
What is an Adapter?
A more concrete example is given further below.
The adapters on the left side, representing the UI, are called the Primary or Driving
Adapters because they are the ones to start some action on the application, while the
adapters on the right side, representing the connections to the backend tools, are
called the Secondary or Driven Adapters because they always react to an action of
a primary adapter.
On the left side, the adapter depends on the port and is injected with a concrete
implementation of the port, which contains the use case. On this side, both the
port and its concrete implementation (the use case) belong inside the
application;
On the right side, the adapter is the concrete implementation of the port and is
injected in our business logic although our business logic only knows about the
interface. On this side, the port belongs inside the application, but its
concrete implementation belongs outside and it wraps around some external
tool.
Using this port/adapter design, with our application in the centre of the system, allows
us to keep the application isolated from implementation details such as ephemeral
technologies, tools, and delivery mechanisms.
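A minimal, hedged sketch of the port/adapter idea in Java follows; the names UserRepositoryPort, RegisterUserUseCase and InMemoryUserRepository are assumed examples, not taken from any framework:

import java.util.HashMap;
import java.util.Map;

// Port (inside the application): the interface the business logic depends on.
interface UserRepositoryPort {
    void save(String userId, String name);
    String findName(String userId);
}

// Use case (inside the application): knows only the port, never the concrete tool.
class RegisterUserUseCase {
    private final UserRepositoryPort repository; // the driven adapter is injected here

    RegisterUserUseCase(UserRepositoryPort repository) { this.repository = repository; }

    void register(String userId, String name) { repository.save(userId, name); }
    String lookup(String userId) { return repository.findName(userId); }
}

// Secondary/driven adapter (outside the application): wraps an external tool,
// here just an in-memory map standing in for a real database.
class InMemoryUserRepository implements UserRepositoryPort {
    private final Map<String, String> data = new HashMap<>();
    public void save(String userId, String name) { data.put(userId, name); }
    public String findName(String userId) { return data.get(userId); }
}

// Primary/driving adapter (e.g., a UI or controller) starts the action.
public class PortsAndAdaptersDemo {
    public static void main(String[] args) {
        RegisterUserUseCase useCase = new RegisterUserUseCase(new InMemoryUserRepository());
        useCase.register("42", "Ada");
        System.out.println(useCase.lookup("42"));
    }
}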
COMMAND:
The command pattern is a behavioral design pattern in which an object is used
to encapsulate all information needed to perform an action or trigger an event at a later time.
This information includes the method name, the object that owns the method and values for
the method parameters.
The invoker does not know anything about a concrete command; it knows only about
the command interface. Invoker object(s), command objects and receiver objects are held
by a client object. The client decides which receiver objects it assigns to the command
objects, and which commands it assigns to the invoker.
Using command objects makes it easier to construct general components that need to
delegate, sequence or execute method calls at a time of their choosing without the need to
know the class of the method or the method parameters.
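A minimal, hedged Command sketch in Java (Light, LightOnCommand and RemoteControl are assumed example names):

// Command interface: encapsulates a request as an object.
interface Command {
    void execute();
}

// Receiver: knows how to perform the actual work.
class Light {
    void on()  { System.out.println("Light is on"); }
    void off() { System.out.println("Light is off"); }
}

// Concrete commands bind a receiver to an action and its parameters.
class LightOnCommand implements Command {
    private final Light light;
    LightOnCommand(Light light) { this.light = light; }
    public void execute() { light.on(); }
}

class LightOffCommand implements Command {
    private final Light light;
    LightOffCommand(Light light) { this.light = light; }
    public void execute() { light.off(); }
}

// Invoker: knows only the Command interface, not the receiver.
class RemoteControl {
    private Command command;
    void setCommand(Command command) { this.command = command; }
    void pressButton() { command.execute(); }
}

// Client: decides which receiver goes with which command and which command the invoker gets.
public class CommandDemo {
    public static void main(String[] args) {
        Light light = new Light();
        RemoteControl remote = new RemoteControl();
        remote.setCommand(new LightOnCommand(light));
        remote.pressButton();
        remote.setCommand(new LightOffCommand(light));
        remote.pressButton();
    }
}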
The design starts with the lowest level components and subsystems. By using these components,
the next immediate higher-level components and subsystems are created or composed. The
process is continued till all the components and subsystems are composed into a single
component, which is considered as the complete system. The amount of abstraction grows
as the design moves to higher levels.
When a new system is to be created by using the basic information of an existing system,
the bottom-up strategy suits the purpose.
Bottom-up approach
Advantages of Bottom-up approach:
Economies can result when general solutions are reused.
It can be used to hide the low-level details of implementation and be merged with the top-
down technique.
Disadvantages of Bottom-up approach:
It is not so closely related to the structure of the problem.
High-quality bottom-up solutions are very hard to construct.
It leads to the proliferation of ‘potentially useful’ functions rather than the most appropriate
ones.
Top-down approach:
Each system is divided into several subsystems and components. Each of the subsystems is
further divided into a set of subsystems and components. This process of division facilitates
forming a system hierarchy structure. The complete software system is considered a single entity
and in relation to the characteristics, the system is split into sub-systems and components. The
same is done with each of the sub-systems.
This process is continued until the lowest level of the system is reached. The design is started
initially by defining the system as a whole and then keeps on adding definitions of the subsystems
and components. When all the definitions are combined, it turns out to be a complete system.
For the solutions of the software that need to be developed from the ground level, a top-down
design best suits the purpose.
Top-down approach
PUBLISH –SUBSCRIBE
The publisher-subscriber (pub-sub) model is a widely used architectural pattern. We can
use it in software development to enable communication between different components in a
system.
In particular, it is often used in distributed systems, where different parts of the system need to
interact with each other but don’t want to be tightly coupled.
In this section, we'll explore the pub-sub model, how it works, and some common use
cases for this architectural pattern.
2. Pub-Sub Model: Overview
The pub-sub model involves publishers and subscribers, making it a messaging pattern.
Specifically, the publishers are responsible for sending messages to the system, while
subscribers are responsible for receiving those messages.
Mainly, the pub-sub model is based on decoupling components in a system, which
means that components can interact without being tightly coupled.
In this section, we’ll discuss how this model works, including sending messages,
checking for subscribers, receiving messages, registering for topics, decoupling
publishers and subscribers, and additional features the message broker implements to
enhance message delivery.
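A minimal, hedged sketch of the pub-sub idea in Java (MessageBroker and the topic names are assumed examples, not a real messaging API):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A tiny in-process message broker: publishers and subscribers only know the broker,
// never each other, which is the decoupling the pub-sub model is built on.
class MessageBroker {
    private final Map<String, List<Consumer<String>>> subscribersByTopic = new HashMap<>();

    // Subscribers register interest in a topic.
    void subscribe(String topic, Consumer<String> subscriber) {
        subscribersByTopic.computeIfAbsent(topic, t -> new ArrayList<>()).add(subscriber);
    }

    // Publishers send a message to a topic; the broker delivers it to all current subscribers.
    void publish(String topic, String message) {
        subscribersByTopic.getOrDefault(topic, List.of())
                          .forEach(subscriber -> subscriber.accept(message));
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        MessageBroker broker = new MessageBroker();
        broker.subscribe("sensor/temperature", msg -> System.out.println("Dashboard got: " + msg));
        broker.subscribe("sensor/temperature", msg -> System.out.println("Logger got: " + msg));
        broker.publish("sensor/temperature", "21.5 C");
    }
}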
However, the pub-sub model also has some drawbacks.
In this section, we’ll explore some use cases of this model, including real-time updates
in online games, smart homes with IoT, and data distribution in data analytics.
OBSERVER
Observer design pattern falls under the category of behavioral design patterns.
The Observer Pattern maintains a one-to-many relationship among objects, ensuring
that when the state of one object is changed, all of its dependent objects
are simultaneously informed and updated. This design pattern is also referred to
as Dependents.
The observer design pattern is used when designing a system where several objects
are interested in any possible modification to a specific object. In other words, the
observer design pattern is employed when there is a one-to-many
relationship between objects, such that when one object is updated, its dependent
objects must be automatically notified.
Real-World Example: If a bus gets delayed, then all the passengers who were
supposed to travel in it get notified about the delay, to minimize inconvenience.
Here, the bus agency is the subject and all the passengers are the observers. All the
passengers are dependent on the agency to provide them with information about the delay.
Social media platforms: for example, many users can follow a particular
person (subject) on a social media platform. All the followers are updated when
the subject updates his/her profile. The users can follow and unfollow the
subject anytime they want.
Newsletters or magazine subscription
Journalists providing news to the media
E-commerce websites notify customers of the availability of specific products
Consider the following scenario: There is a new laughter club in town, with a grand
opening, it caught the attention of a lot of people interested in being a member of
the club. Thrilled with the overwhelming response, the club owner was a bit worried
about the smooth management and involvement of all the members.
The observer design pattern is the best solution to the owner’s problem. The owner
here is the subject and all the
members of the club are observers. The observers
have no access to the club’s information and the upcoming events unless the owner
notifies them of it. Also, the members have the option to opt out of the club
whenever they want to. This allows the owner to easily manage and engage all of
the members.
Structure
The subject delivers events that are intriguing to the observers. These events
occur due to changes in the state of the subject or the execution of certain
behaviors. Subjects have a registration architecture that enables new
observers to join and existing observers to withdraw from the list.
Whenever a new event occurs, the subject iterates through the list of
observers, calling the notify method provided in the 'observer' interface.
The Observer interface declares the notification method, usually an update
method. The method may include several parameters that
allow the subject to pass event information together with the update.
Concrete Observers do certain activities in response to alerts sent by the
Subject. All the concrete observer classes should implement the base observer
interface, and the subject interface is coupled only with the base observer
interface.
Sometimes, observers want some additional context in order to properly
process any update notified by the Subject. As a result, the Subject frequently
supplies some contextual information as parameters to the notification
function. The subject can pass itself as an argument, allowing the subject to
immediately retrieve any needed data.
Implementation
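The notes do not reproduce the implementation here, so below is a minimal, hedged Observer sketch in Java; Club and ClubMember are assumed example names following the laughter-club scenario above:

import java.util.ArrayList;
import java.util.List;

// Observer interface: the notification method the subject will call.
interface Member {
    void update(String event);
}

// Concrete observer: reacts to notifications from the subject.
class ClubMember implements Member {
    private final String name;
    ClubMember(String name) { this.name = name; }
    public void update(String event) { System.out.println(name + " notified: " + event); }
}

// Subject: keeps a registration list and notifies all observers when an event occurs.
class Club {
    private final List<Member> members = new ArrayList<>();

    void join(Member m)  { members.add(m); }    // register a new observer
    void leave(Member m) { members.remove(m); } // withdraw from the list

    void announce(String event) {
        for (Member m : members) {
            m.update(event); // push the event to every registered observer
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Club club = new Club();
        Member a = new ClubMember("Asha");
        Member b = new ClubMember("Ravi");
        club.join(a);
        club.join(b);
        club.announce("Comedy night on Friday");
        club.leave(b);
        club.announce("Venue changed to the park");
    }
}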
PROXY:
When to use this pattern?
The Proxy pattern is used when we need to create a wrapper to hide the main object's
complexity from the client.
TYPES OF PROXIES
Remote proxy:
They are responsible for representing the object located remotely. Talking to
the real object might involve marshalling and unmarshalling of data and talking to
the remote object. All that logic is encapsulated in these proxies and the client
application need not worry about them.
Virtual proxy:
These proxies will provide some default and instant results if the real object
is supposed to take some time to produce results. These proxies initiate the
operation on real objects and provide a default result to the application. Once the
real object is done, these proxies push the actual data to the client where it has
provided dummy data earlier.
Protection proxy:
If an application does not have access to some resource then such proxies will
talk to the objects in applications that have access to that resource and then get
the result back.
Smart Proxy:
A smart proxy provides an additional layer of security by interposing specific
actions when the object is accessed. An example is checking whether the real object
is locked before it is accessed, to ensure that no other object can change it.
Some Examples
A very simple real-life scenario is a college internet connection, which restricts access
to a few sites. The proxy first checks the host you are connecting to; if it is not part
of the restricted site list, then it connects to the real internet. This example is based
on protection proxies.
Interface of Internet
package com.saket.demo.proxy;
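Only the package declaration survives in the notes, so below is a hedged sketch of how such a protection proxy is commonly written in Java; the interface and class names are assumed for illustration:

import java.util.Arrays;
import java.util.List;

// Subject interface shared by the real object and the proxy.
interface Internet {
    void connectTo(String host) throws Exception;
}

// Real subject: actually opens the connection.
class RealInternet implements Internet {
    public void connectTo(String host) {
        System.out.println("Connecting to " + host);
    }
}

// Protection proxy: checks the restricted list before delegating to the real subject.
class ProxyInternet implements Internet {
    private final Internet realInternet = new RealInternet();
    private static final List<String> RESTRICTED_SITES =
            Arrays.asList("abc.com", "den.com", "ijk.com", "lnm.com");

    public void connectTo(String host) throws Exception {
        if (RESTRICTED_SITES.contains(host.toLowerCase())) {
            throw new Exception("Access Denied to " + host);
        }
        realInternet.connectTo(host);
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Internet internet = new ProxyInternet();
        try {
            internet.connectTo("example.edu"); // allowed
            internet.connectTo("abc.com");     // blocked by the proxy
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}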
Benefits:
One of the advantages of Proxy pattern is security.
This pattern avoids duplication of objects that might be huge in size and memory
intensive. This in turn increases the performance of the application.
The remote proxy also ensures security by installing the local code proxy
(stub) in the client machine and then accessing the server with the help of the
remote code.
Drawbacks/Consequences:
This pattern introduces another layer of abstraction which sometimes may be an
issue if the RealSubject code is accessed by some of the clients directly and some
of them might access the Proxy classes. This might cause disparate behaviour.
FAÇADE:
The facade pattern (also spelled façade) is a software-design pattern
commonly used in object-oriented programming. Analogous to a facade in architecture,
a facade is an object that serves as a front-facing interface masking more complex
underlying or structural code.
The people walking past the road can only see the glass face of the building.
They do not know anything about it, the wiring, the pipes, and other complexities. It
hides all the complexities of the building and displays a friendly face.
1. Facade is a structural design pattern with the intent to provide a simplified (but
limited) interface to a complex subsystem.
2. A Facade class can often be transformed into a Singleton, since a single facade
object is usually sufficient.
4. The Facade class does not encapsulate the subsystem classes but provides a simple
interface to their functionality.
5. Classes in the subsystem will still be available to the client. However, Facade
decouples clients and subsystems so that they can change independently.
6. The same facade can be used by different clients.
7. Facade can be recognized in a class that has a simple interface but delegates
most of the work to other classes.
8. Usually, facades manage the full life cycle of objects they use.
9. The client is shielded from the unwanted complexities of the subsystems and gets
access only to a fraction of a subsystem's capabilities.
10. Abstract Factory can serve as an alternative to Facade when you only want to
hide the way the subsystem objects are created from the client code.
Usage:
1. Use the Facade pattern when you need to have a limited but straightforward interface to a complex subsystem.
2. Use the Facade when you want to structure a subsystem into layers.
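As a hedged illustration of the pattern described above, here is a minimal Facade sketch in Java; the subsystem classes CPU and Memory and the facade Computer are assumed examples:

// Subsystem classes with their own complexities.
class CPU {
    void freeze()            { System.out.println("CPU: freeze"); }
    void jump(long position) { System.out.println("CPU: jump to " + position); }
    void execute()           { System.out.println("CPU: execute"); }
}

class Memory {
    void load(long position, String data) {
        System.out.println("Memory: load '" + data + "' at " + position);
    }
}

// Facade: a simple front-facing interface that hides the subsystem interactions.
class Computer {
    private final CPU cpu = new CPU();
    private final Memory memory = new Memory();

    void start() {
        cpu.freeze();
        memory.load(0L, "boot image");
        cpu.jump(0L);
        cpu.execute();
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // The client sees only one simple call; the subsystem classes remain accessible if needed.
        new Computer().start();
    }
}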
Related Patterns:
ARCHITECTURAL STYLES
Introduction:
Architectural design represents the overall structure of the software. IEEE
defines architectural design as “the process of defining a collection of hardware and software
components and their interfaces to establish the framework for the development of a computer
system.” The software that is built for computer-based systems can exhibit one of these many
architectural styles.
Each style describes a system category that consists of a set of components, a set of
connectors that enable communication among the components, constraints that define how
components can be integrated to form the system, and semantic models that help a designer
understand the overall properties of the system.
1] Data-centered architectures:
A data store resides at the center of this architecture and is accessed frequently by the
other components, which update, add, delete or modify the data present within the store.
The figure illustrates a typical data-centered style. The client software accesses a central
repository. A variation of this approach transforms the repository into a blackboard that
sends notifications to client software when data of interest to a client changes.
This data-centered architecture promotes integrability. This means that existing
components can be changed and new client components can be added to the architecture
without concern about other clients.
Data can be passed among clients using blackboard mechanism.
Advantages of Data-centered architecture
The repository of data is independent of the clients.
Clients work independently of each other.
It may be simple to add additional clients.
Modification can be very easy.
2] Data-flow architectures:
This kind of architecture is used when input data is transformed into output data through a
series of computational or manipulative components.
The figure represents a pipe-and-filter architecture, since it uses both pipes and filters: it has
a set of components called filters connected by pipes.
Pipes are used to transmit data from one component to the next.
Each filter will work independently and is designed to take data input of a certain form and
produces data output to the next filter of a specified form. The filters don’t require any
knowledge of the working of neighboring filters.
If the data flow degenerates into a single line of transforms, then it is termed as batch
sequential. This structure accepts the batch of data and then applies a series of sequential
components to transform it.
Advantages of Data Flow architecture
It encourages maintenance, reuse, and modification.
With this design, concurrent execution is supported.
Disadvantages of Data Flow architecture
It frequently degenerates into a batch sequential system.
Data flow architecture is not well suited to applications that require greater user interaction.
It is not easy to coordinate two different but related streams.
3] Call and Return architectures: It is used to create a program that is easy to scale and
modify. Many sub-styles exist within this category. Two of them are explained below.
4] Object Oriented architecture: The components of a system encapsulate data and the
operations that must be applied to manipulate the data. The coordination and communication
between the components are established via the message passing.
Characteristics of Object Oriented architecture
Objects protect the system's integrity.
An object is unaware of the representation of other objects.
Advantages of Object Oriented architecture
It enables the designer to decompose a problem into a collection of autonomous objects.
The implementation details of an object are hidden from other objects, allowing changes to be
made without having an impact on the other objects.
5] Layered architecture:
A number of different layers are defined, with each layer performing a well-defined set of
operations. Each layer performs operations that progressively become closer to the machine
instruction set.
At the outer layer, components receive user interface operations, and at the inner
layers, components perform operating system interfacing (communication and
coordination with the OS).
Intermediate layers provide utility services and application software functions.
One common example of this architectural style is OSI-ISO (Open Systems Interconnection-
International Organisation for Standardisation) communication system.
Layered architecture:
Pattern Description
Within the application, each layer of the layered architecture pattern is
responsible for a particular task.
For instance, the user interface and browser communication logic would be
handled by the presentation layer, whilst the business layer would be in charge
of carrying out the specific business rules related to the request.
Each layer of the architecture creates an abstraction around the work required to
complete a specific business requirement.
For instance, the presentation layer just needs to display customer data on a
screen in a specific way; it is not required to understand or worry about how to
obtain customer data.
The business layer only needs to obtain the data from the persistence layer,
apply business logic to the data (e.g., calculate values or aggregate data), and
then pass that information up to the presentation layer.
In a similar manner, the business layer need not worry about how to format
customer data for display on a screen or even where the customer data is
coming from.
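A minimal, hedged sketch of the layered idea in Java follows; the three classes are assumed examples of a persistence, business, and presentation layer:

import java.util.Map;

// Persistence layer: only knows how to fetch raw data (a hard-coded map stands in for a database).
class CustomerRepository {
    private static final Map<Integer, String> DB = Map.of(1, "Asha", 2, "Ravi");
    String findNameById(int id) { return DB.get(id); }
}

// Business layer: applies business rules; it does not know how data is displayed.
class CustomerService {
    private final CustomerRepository repository = new CustomerRepository();
    String getDisplayName(int id) {
        String name = repository.findNameById(id);
        return (name == null) ? "UNKNOWN" : name.toUpperCase(); // example business rule
    }
}

// Presentation layer: only formats and displays; it does not know where the data comes from.
public class CustomerScreen {
    public static void main(String[] args) {
        CustomerService service = new CustomerService();
        System.out.println("Customer: " + service.getDisplayName(1));
    }
}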
CLIENT-SERVER MODEL
The Client-server model is a distributed application structure that partitions task or workload
between the providers of a resource or service, called servers, and service requesters called
clients. In the client-server architecture, when the client computer sends a request for data to
the server through the internet, the server accepts the request, processes it, and delivers the
requested data packets back to the client.
Servers: When we talk about servers, we mean a person or medium that serves
something. In this digital world, a server is a remote computer that provides
information (data) or access to particular services.
So, it is basically the client requesting something and the server serving it, as long as it is
present in the database.
The Pipe and Filter architecture is inspired by the Unix technique of connecting
the output of an application to the input of another via pipes on the shell.
The pipe and filter architecture consists of one or more data sources. The data
source is connected to data filters via pipes. Filters process the data they receive,
passing them to other filters in the pipeline. The final data is received at a Data Sink:
Definition
Pipe and Filter is another architectural pattern, which has independent entities
called filters (components) which perform transformations on data and process the
input they receive, and pipes, which serve as connectors for the stream of data being
transformed, each connected to the next component in the pipeline.
Many systems are required to transform streams of discrete data items, from input to
output. Many types of transformations occur repeatedly in practice, and so it is
desirable to create these as independent, reusable parts, called filters (Len Bass, 2012).
The pipe-and-filter pattern performs transformations of streams of data. As shown in the diagram,
the data flows in one direction: it starts at a data source, arrives at a filter's input port(s) where
processing is done at the component, and is then passed via its output port(s) through
a pipe to the next filter, eventually ending at the data target.
A single filter can consume data from, or produce data to, one or more ports.
They can also run concurrently and are not dependent. The output of one filter is the
input of another, hence, the order is very important.
A pipe has a single source for its input and a single target for its output. It
preserves the sequence of data items, and it does not alter the data passing through.
· Filters can be treated as black boxes. Users of the system don’t need to know the
logic behind the working of each filter.
· Re-usability. Each filter can be called and used over and over again.
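A hedged, minimal pipe-and-filter sketch in Java following the description above, where each filter is a function over a stream of strings (the filter names are assumed examples):

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PipeAndFilterDemo {

    // Each filter transforms a stream of data items and knows nothing about its neighbours.
    static Function<Stream<String>, Stream<String>> trimFilter =
            lines -> lines.map(String::trim);

    static Function<Stream<String>, Stream<String>> nonEmptyFilter =
            lines -> lines.filter(line -> !line.isEmpty());

    static Function<Stream<String>, Stream<String>> upperCaseFilter =
            lines -> lines.map(String::toUpperCase);

    public static void main(String[] args) {
        // Data source.
        Stream<String> source = Stream.of("  hello ", "", " pipe and filter  ");

        // The "pipes" are simply the composition order of the filters.
        Stream<String> pipeline = trimFilter
                .andThen(nonEmptyFilter)
                .andThen(upperCaseFilter)
                .apply(source);

        // Data sink.
        List<String> result = pipeline.collect(Collectors.toList());
        System.out.println(result); // [HELLO, PIPE AND FILTER]
    }
}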
However, there are a few drawbacks to this architecture as well.
The architectural pattern is very popular and used in many systems, such as the text-
based utilities in the UNIX operating system. Whenever different data sets need to be
manipulated in different ways, you should consider using the pipe and filter
architecture. More specific implementations are discussed below:
1. Compilers:
The front-end is responsible for parsing the input language and performing
syntax and semantic analysis, and then transforms it into an intermediate representation. The middle-
end takes the intermediate representation and usually performs several optimization
steps on it; the resulting transformed program is then passed to the back-end, which
transforms it into the target language.
Each level consists of several steps as well, and everything together forms the
pipeline of the compiler.
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user interface
consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is based on the profile of users who will interact with the system,
i.e., their understanding, skill and knowledge, type of user, etc. Based on the user
profile, users are grouped into categories. Requirements are gathered from each category,
and based on these requirements the developer understands how to develop the
interface. Once all the requirements are gathered, a detailed analysis is conducted. In
the analysis part, the tasks that the user performs to establish the goals of the system
are identified, described and elaborated. The analysis of the user environment
focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e., control
mechanisms that enable the user to perform desired tasks. Indicate how these control
mechanisms affect the system. Specify the action sequence of tasks and subtasks,
also called a user scenario. Indicate the state of the system when the user performs a
particular task. Always follow the three golden rules stated by Theo Mandel. Design
issues such as response time, command and action structure, error handling, and help
facilities are considered as the design model is refined. This phase serves as the
foundation for the implementation phase.
2. Provide for flexible interaction: different users may prefer different interaction
mechanisms; for example, one user might use a keyboard, another a mouse, and another
might use a touch screen, etc. Hence all interaction mechanisms should be
provided.
3. Allow user interaction to be interruptible and undoable: When a user is doing
a sequence of actions the user must be able to interrupt the sequence to do some
other work without losing the work that had been done. The user should also be
able to do undo operation.
4. Streamline interaction as skill level advances and allow the interaction to be
customized: Advanced or highly skilled user should be provided a chance to
customize the interface as user wants which allows different interaction
mechanisms so that user doesn’t feel bored while using the same interaction
mechanism.
5. Hide technical internals from casual users: The user should not be aware of the
internal technical details of the system. He should interact with the interface just
to do his work.
6. Design for direct interaction with objects that appear on-screen: The user
should be able to use the objects and manipulate the objects that are present on
the screen to perform a necessary task. By this, the user feels easy to control over
the screen.
UNIT-4
Testing-Unit testing-Black box testing-White box testing-Integration
and System testing-Regression testing-Debugging-Program Analysis-
Symbolic Execution-Model Checking-Case Study
1. TESTING:
1. UNIT TESTING
Unit testing is a type of software testing that focuses on individual
units or components of a software system.
The purpose of unit testing is to validate that each unit of the
software works as intended and meets the requirements.
Unit testing is typically performed by developers, and it is
performed early in the development process before the code is integrated
and tested as a whole system.
Unit tests are automated and are run each time the code is changed to
ensure that new code does not break existing functionality.
1. Black Box Testing: This testing technique is used to cover the unit
tests for the input, user interface, and output parts.
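As a hedged illustration of an automated, black-box style unit test, here is a JUnit 5 sketch in Java; the Calculator class is an assumed example, and the test assumes JUnit 5 (org.junit.jupiter) is on the classpath:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// The unit under test (assumed example).
class Calculator {
    int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }
}

// Black-box style unit tests: they exercise inputs and outputs only.
class CalculatorTest {

    @Test
    void dividesTwoPositiveNumbers() {
        assertEquals(4, new Calculator().divide(8, 2));
    }

    @Test
    void rejectsDivisionByZero() {
        assertThrows(IllegalArgumentException.class, () -> new Calculator().divide(8, 0));
    }
}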
White box testing techniques analyze the internal structures: the data structures used,
internal design, code structure, and the working of the software, rather than just the
functionality as in black box testing. It is also called glass box testing, clear box testing,
or structural testing.
2. Branch Coverage:
In this technique, test cases are designed so that each branch from all
decision points is traversed at least once. In a flowchart, all edges must
be traversed at least once.
Test cases are required such that all branches of all decisions are
covered, i.e., all edges of the flowchart are covered.
4. Condition Coverage
In this technique, all individual conditions must be covered as shown
in the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
5. Multiple Condition Coverage
Black box testing is less exhaustive as compared to white box testing; white box
testing is comparatively more exhaustive than black box testing.
This approach is known as Big Bang Integration Testing. In simple words, all the modules
of the system are simply put together and tested.
This approach is practicable only for very small systems. If an
error is found during the integration testing, it is very difficult to
localize the error as the error may potentially belong to any of the
modules being integrated. So, debugging errors reported during Big
Bang integration testing is very expensive to fix.
Big-bang integration testing is a software testing approach in
which all components or modules of a software application are
combined andtested at once.
This approach is typically used when the software components
have a low degree of interdependence or when there are constraints in
the development environment that prevent testing individual
components.
The goal of big-bang integration testing is to verify the overall
functionality of the system and to identify anyintegration problems that
arise when the components are combined.
While big-bang integration testing can be useful in some
situations, it can also be
a high-risk approach, as the complexity of the
system and the number of interactions between components can make it
difficult to identify and diagnose problems.
Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low
degree of interdependence between components.
Disadvantages:
1. There will be quite a lot of delay because you would have to
wait for all the modules to be integrated.
2. High-risk critical modules are not isolated and tested on
priority since all modules are tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and
diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
System testing
A type of Software testing, System testing comes at the third level
after Unit testing and Integration testing. The goal of the system
testing is to compare the functional and non-functional features of
the system against the user requirements.
Non-Functional Testing
Non-functional testing checks for non-functional aspects of software, such
as performance, reliability, usability, and application readiness. It intends to
assess a system’s performance per the non-functional conditions that never
appear in the functional tests. Often, non- functional testing is important to
check for security, application load capability, and utility to measure user
satisfaction.
One example of non-functional testing is checking for the number of
users the system can handle at a time. It helps to determine the performance
and usability of the system under high traffic.
Advantages of System Testing
It is an end-to-end test that involves all the software components, as in
real-life scenarios.
Type of testing: System testing covers both functional and non-functional tests, such as
performance, security, regression testing, unit testing, etc. Integration testing covers only
functional tests and can be performed in different approaches like top-down, bottom-up,
big-bang, and more.
REGRESSION TESTING
It is the process of testing the modified parts of the code and the
parts that might get affected due to the modifications to ensure that no
new errors have been introduced in the software after the modifications
have been made. Regression means the return of something and in the
software field, it refers to the return of a bug.
When to do regression testing?
When a new functionality is added to the system and the
code has been modified to absorb and integrate that
functionality with the existing code.
When some defect has been identified in the software and
the code is debugged to fix it.
When the code is modified to optimize its working.
Process of Regression testing:
Firstly, whenever we make some changes to the source code for any reason, like adding new
functionality, optimization, etc., then our program, when executed, fails
in the previously designed test suite for obvious reasons. After the
failure, the source code is debugged in order to identify the bugs in the
program. After identification of the bugs in the source code,
appropriate modifications are made. Then appropriate test cases are
selected from the already existing test suite which covers all the
modified and affected parts of the source code. We can add new test
cases if required. In the end, regression testing is performed using the
selected test cases.
Techniques for the selection of Test cases for Regression Testing:
Select all test cases: In this technique, all the test cases are
selected from the already existing test suite. It is the simplest
and safest technique but not much efficient.
Select test cases randomly: In this technique, test cases are
selected randomly from the existing test-suite, but it is only
useful if all the test cases are equally good in their fault
detection capability which is very rare. Hence, it is not used
in most of the cases.
Select modification traversing test cases: In this technique,
only those test cases are selected which covers and tests the
modified portions of the source code the parts which are
affected by these modifications.
Select higher priority test cases: In this technique, priority
codes are assigned to each test case of the test suite based
upon their bug detection capability, customer requirements,
etc. After assigning the priority codes, test cases with the
highest priorities are selected for the process of regression
testing. The test case with the highest priority has the highest rank and is selected first.
DEBUGGING
It is important to have a process in place for reporting and tracking bugs so that they can be
effectively managed and resolved.
In summary, debugging is an important aspect of software
engineering, it’s the process of identifying and resolving errors, or
bugs, in a software system.
There are several common methods and techniques used in
debugging, including code inspection, debugging tools, unit testing,
integration testing, system testing, monitoring, and logging. It is an
iterative process that may take multiple attempts to identify and
resolve all bugs in a software system.
In the context of software engineering, debugging is the process
of fixing a bug in the software. In other words, it refers to identifying,
analyzing, and removing errors.
This activity begins after the software fails to execute properly
and concludes by solving the problem and successfully testing the
software.
It is considered to be an extremely complex and tedious task
because errors need to be resolved at all stages of debugging.
A better approach is
to run the program within a debugger, which
is a specialized environment for controlling and monitoring the
execution of a program.
The basic functionality provided by a debugger is the insertion of
breakpoints within the code. When the program is executed within the
debugger, it stops at each breakpoint. Many IDEs, such as Visual C++
and C-Builder provide built-in debuggers.
Debugging Process: The steps involved in debugging are:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is genuine.
Defect Analysis using modeling, documentation, finding and
testing candidate flaws, etc.
Defect Resolution by making required changes to the
system.
Validation of corrections.
The debugging process will always have one of two outcomes:
1. The cause will be found and corrected.
2. The cause will not be found.
PROGRAM ANALYSIS
Program analysis tools help developers to improve and maintain software systems over the
course of the whole development life cycle.
Importance of Program Analysis Tools
1. Finding Faults and Security Vulnerabilities in the
Code: Automatic program analysis tools can find and
highlight possible faults, security flaws, and bugs in the code.
This lowers the possibility that bugs will get into
production by assisting developers in identifying problems
early in the process.
2. Memory Leak Detection: Certain tools are designed
specifically to find memory leaks and inefficiencies. By
doing so, developers may make sure that their software
doesn’t gradually use up too much memory.
3. Vulnerability Detection: Potential vulnerabilities like
buffer overflows, injection attacks or other security flaws
can be found using programme analysis tools, particularly
those that are security-focused. For the development of
reliable and secure software, this is essential.
4. Dependency analysis: By examining the dependencies
among various system components, tools can assist
developers in comprehending and controlling the
connections between modules. This is necessary in order to
make well-informed decisions during refactoring.
5. Automated Testing Support: Program analysis tools are frequently
integrated into CI/CD pipelines to automate testing procedures. This
integration helps in identifying problems early in the development
cycle and ensures that only well-tested, high-quality code is released
into production.
Classification of Program Analysis Tools
Program analysis tools are classified into two categories: static program analysis tools and
dynamic program analysis tools.
Dynamic program analysis tools require the program to be executed on test data so that the
structural coverage achieved can be recorded. The results of dynamic program analysis tools
are in the form of a histogram or a pie chart that describes the structural coverage
obtained for different modules of the program.
The output of a dynamic program analysis tool can be stored and
printed easily and provides evidence that complete testing has been
done. The result of dynamic analysis is the extent of testing performed
in terms of white box testing. If the testing result is not satisfactory, then more test
cases are designed and added to the test scenario. Dynamic analysis
also helps in the elimination of redundant test cases.
4. SYMBOLIC EXECUTION
MODEL CHECKING
Model checking is the most successful approach that’s emerged for
verifying requirements.
The essential idea behind model checking is as follows: a model-checking
tool accepts system requirements or a design (called a model) and a
property (called a specification) that the final system is expected to
satisfy.
The tool then outputs yes if the given model satisfies given
specifications and generates a counterexample otherwise. The
counterexample details why the model doesn’t satisfy the
specification. By studying the counterexample, you can pinpoint the
source of the error in the model, correct the model, and try again.
The idea is that by ensuring that the model satisfies enough system
properties, we increase our confidence in the correctness of the
model. The system requirements are called models because they
represent requirements or design.
So what formal language works for defining models? There’s no
single answer, since requirements (or design) for systems in different
application domains vary greatly.
For instance, requirements of a banking system and an aerospace
system differ in size, structure, complexity, nature of system data,
and operations performed.
In contrast, most real-time embedded or safety-critical systems
are control-oriented rather than data-oriented—meaning that
dynamic behavior is much more important than business logic (the
structure of and operations on the internal data maintained by the
system).
Such control-oriented systems occur in a wide variety of
domains: aerospace, avionics, automotive, biomedical
1. Conflict Management
Conflict management is the process to restrict the negative features of
conflict while increasing the positive features of conflict. The goal of conflict
management is to improve learning and group results including efficacy or
performance in an organizational setting.
Properly managed conflict can enhance group results.
2. Risk Management
Risk management is the analysis and identification of risks that is followed
by synchronized and economical implementation of resources to minimize,
operate and control the possibility or effect of unfortunate events or to
maximize the realization of opportunities.
3. Requirement Management
It is the process of analyzing, prioritizing, tracking, and documenting
requirements and then supervising change and communicating to pertinent
stakeholders. It is a continuous process during a project.
4. Change Management
Change management is a systematic approach to dealing with the transition or
transformation of an organization’s goals, processes, or technologies. The
purpose of change management is to execute strategies for effecting change,
controlling change, and helping people to adapt to change.
5. Software Configuration Management
Software configuration management is the process of controlling and tracking
changes in the software, part of the larger cross-disciplinary field of
configuration management. Software configuration management includes
revision control and the inauguration of baselines.
6. Release Management
Release Management is the task of planning, controlling, and scheduling the
built-in deploying releases. Release management ensures that the
organization delivers new and enhanced services required by the customer
while protecting the integrity of existing services.
PROJECT SCHEDULING
Project-task scheduling is a significant project planning activity. It comprises
deciding which functions would be taken up when. To schedule the project
plan, a software project manager needs to do the following:
The first step in scheduling a software project involves identifying all the tasks required to
complete the project. A good judgment of the intricacies of the project and the development
process helps the manager to effectively identify the critical tasks of the project.
Next, the large tasks are broken down into a valid set of small activities which are
assigned to different engineers. The work breakdown structure formalism helps the manager
to break down the tasks systematically. After the project manager has broken down the work
and constructed the work breakdown structure, he has to find the dependency among the
activities.
Dependency among the various activities determines the order in which the activities are
carried out. If an activity A requires the results of another activity B, then activity A must be
scheduled after activity B. In general, the task dependencies define a partial ordering among
tasks, i.e., each task may precede a subset of other tasks, but some tasks might not have any
precedence ordering defined between them (called concurrent tasks). The dependency among
the activities is represented in the form of an activity network.
The project manager tracks the progress of a project by auditing the timely completion of the
milestones. If he observes that the milestones start getting delayed, then he has to handle the
activities carefully so that the overall deadline can still be met.
DEVOPS: MOTIVATION
DevOps is a combination of two words, one is software Development,
and second is Operations. This allows a single team to handle the entire
application lifecycle, from development to testing, deployment, and
operations. DevOps helps you to reduce the disconnection between software
developers, quality assurance (QA) engineers, and system administrators.
DevOps has become one of the most valuable business disciplines for
enterprises or organizations. With the help of DevOps, quality, and
speed of the application delivery has improved to a great extent.
DevOps is all about the integration of the operations and development process.
Organizations that have adopted DevOps have noticed a 22% improvement in
software quality, a 17% improvement in application deployment frequency, and a
22% hike in customer satisfaction, with a 19% revenue hike as a result of
successful DevOps implementation.
CLOUD AS A PLATFORM
There are many ways to define a cloud platform, but in the simplest terms, the
operating system and hardware of a server in an Internet-based data centre are
referred to as a cloud platform. It enables remote and large-scale coexistence of
software and hardware goods.
Cloud systems come in a range of shapes and sizes. None of them are
suitable for all. To meet the varying needs of consumers, a range of
models, forms, and services are available. They are as follows:
o Public Cloud: Third-party providers that distribute computing
services over the Internet are known as public cloud platforms. A
few good examples of trending and mostly used cloud platform are
Google Cloud Platform, AWS (Amazon Web Services), Microsoft
Azure, Alibaba and IBM Bluemix.
o Private Cloud: A private cloud is normally hosted by a third- party
service provider or in an on-site data centre. A private cloud platform is
always dedicated to a single company and it is the key difference
between the public and private cloud. Or we can say that a
private cloud is a series of cloud computing services used primarily by
one corporation or organization.
o Hybrid Cloud: The type of cloud architecture that combines both public and
private cloud systems is termed a hybrid cloud platform. Data and programs are
easily migrated from one to the other. This allows the company to be more flexible
while still improving infrastructure, security, and enforcement.
Cost
Cloud computing reduces the upfront costs of purchasing hardware and software,
as well as the costs of setting up and operating on-site datacenters: server racks,
round-the-clock power and cooling, and the IT professionals needed to manage the
infrastructure. It quickly adds up.
Global scale
Performance
Security
Speed
It means that the huge amount of calculation and the huge data retrieval
as in download and upload can happen just within the blink of an eye,
obviously depending on the configuration.
Reliability
OPERATIONS
Comparison of IaaS, PaaS, and SaaS:

Used by: IaaS is used by network architects; PaaS is used by developers; SaaS is used by the end user.

Uses: IaaS uses virtual machines and virtual storage; PaaS uses a virtual environment for deployment and development tools for applications.

Model: IaaS is a service model that provides virtualized computing resources over the internet; PaaS is a cloud computing model that delivers tools that are used for the development of applications; SaaS is a service model in cloud computing that hosts software to make it available to clients.

Technical understanding: IaaS requires technical knowledge; PaaS requires some knowledge for the basic setup; SaaS has no requirement about technicalities, since the company handles everything.

Popularity: IaaS is popular among developers and researchers; PaaS is popular among developers who focus on the development of apps and scripts; SaaS is popular among consumers and companies, for uses such as file sharing, email, and networking.

Percentage rise: IaaS has about a 12% increment; PaaS has around a 32% increment; SaaS has around a 27% rise in the cloud computing model.

Usage: IaaS is used by skilled developers to develop unique applications; PaaS is used by mid-level developers to build applications; SaaS is used among the users of entertainment.

Cloud services (examples): IaaS: Amazon Web Services, Sun, vCloud Express; PaaS: Facebook and Google search engine; SaaS: MS Office web, Facebook, and Google Apps.

User controls: IaaS: Operating System, Runtime, Middleware, and Application data; PaaS: data of the application; SaaS: nothing.

Others: IaaS is highly scalable and flexible; PaaS is highly scalable to suit different businesses according to resources; SaaS is highly scalable to suit small, mid, and enterprise-level businesses.
Advantages of IaaS
The resources can be deployed by the provider to a
customer’s environment at any given time.
It offers users the ability to scale the business based on their requirements.
The provider has various options when deploying resources
including virtual machines, applications, storage, and
networks.
It has the potential to handle an immense number of users.
It is easy to expand and saves a lot of money, since companies can avoid the huge
upfront costs associated with implementing advanced technologies themselves.
Cloud provides the architecture.
Enhanced scalability and quite flexible.
Dynamic workloads are supported.
Disadvantages of IaaS
There are security issues.
Service and network delays are quite an issue in IaaS.
Advantages of PaaS –
Programmers need not worry about what specific database
or language the application has been programmed in.
It offers developers the ability to build applications without the
overhead of the underlying operating system or infrastructure.
Provides the freedom to developers to focus on the
application’s design while the platform takes care of the
language and the database.
DEPLOYMENT PIPELINE
A deployment pipeline automates the steps needed to move code changes from version
control to production, improving the end product for the user. By reducing the need for any
manual tasks, teams are able to deploy new code updates much quicker and
with less risk of any human error.
1. Version Control
2. Acceptance Tests
3. Independent Deployment
4. Production Deployment
Teams are able to release new product updates and features much faster.
There is less chance of human error by eliminating manual
steps.
Automating the compilation, testing, and deployment of code allows
developers and other DevOps team members to focus more on
continuously improving and innovating a product.
Troubleshooting is much faster, and updates can be easily rolled back to
a previous working version.
Production teams can better respond to user wants and needs with
faster, more frequent updates by focusing on smaller releases.
Making use of available tools will help to fully automate and get the most out
of your deployment pipeline. When first building a deployment pipeline, there
are several essential tool categories that must be addressed, including source
control, build/compilation, containerization, configuration management, and
monitoring.
A development pipeline should be constantly evolving, improving and
introducing new tools to increase speed and automation. Some favorite tools
for building an optimal deployment pipeline include:
Jenkins
Azure DevOps
CodeShip
PagerDuty
DEPLOYMENT TOOLS
DevOps tools make the whole deployment process an easy-going one, and they can help
you with the following aspects:
Increased development.
Improvement in operational efficiency.
Faster release.
Non-stop delivery.
Quicker rate of innovation.
Improvement in collaboration.
Availability: You must ensure that authorized people and systems can access
the data in your deployment pipeline.
For each of these objectives, ask yourself what would happen if your pipeline
was breached:
Confidentiality: How damaging would it be if data was disclosed to a bad
actor, or leaked to the public?
Integrity: How damaging would it be if data was modified or deleted by a bad
actor?
Availability: How damaging would it be if a bad actor disrupted your data
access?
To make the results comparable across resources, it's useful to introduce
security categories. Standards for Security Categorization (FIPS-199) suggests
using the following four categories:
High: Damage would be severe or catastrophic
Moderate: Damage would be serious
Low: Damage would be limited
Not applicable: The standard doesn't apply
Depending on your environment and context, a different set of categories could
be more appropriate.
The confidentiality and integrity of pipeline data exist on a spectrum, based on
the security categories just discussed. The following subsections contain
examples of resources with different confidentiality and integrity
measurements:
Resources with low confidentiality, but low, moderate, and high integrity
The following resource examples all have low confidentiality:
Low integrity: Test data
Moderate integrity: Public web server content, policy constraints for your
organization
Resources with medium confidentiality, but low, moderate, and high integrity
The following resource examples all have medium confidentiality:
Low integrity: Internal web server content
Moderate integrity: Audit logs
High integrity: Application configuration files
Resources with high confidentiality, but low, moderate, and high integrity
The following resource examples all have high confidentiality:
Low integrity: Usage data and personally identifiable information
Moderate integrity: Secrets
High integrity: Financial data, KMS keys
Categorize applications based on the data that they access
When an application accesses sensitive data, the application and the
deployment pipeline that manages the application can also become sensitive.
To qualify that sensitivity, look at the data that the application and the pipeline
need to access.
Once you've identified and categorized all data accessed by an application, you
can use the following categories to initially categorize the application before
you design a secure deployment pipeline:
Confidentiality: Highest category of any data accessed
Integrity: Highest category of any data accessed
Availability: Highest category of any data accessed
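As a hedged illustration of the "highest category of any data accessed" rule, here is a small Java sketch; the enum and the example data categories are assumptions for illustration, not part of any Google Cloud API:

import java.util.List;

// FIPS-199 style security categories, ordered from lowest to highest impact.
enum SecurityCategory {
    NOT_APPLICABLE, LOW, MODERATE, HIGH;

    // The category of an application is the highest category of any data it accesses.
    static SecurityCategory highestOf(List<SecurityCategory> dataCategories) {
        SecurityCategory result = NOT_APPLICABLE;
        for (SecurityCategory c : dataCategories) {
            if (c.ordinal() > result.ordinal()) {
                result = c;
            }
        }
        return result;
    }
}

public class PipelineCategorization {
    public static void main(String[] args) {
        // Example: an application reads test data (LOW) and audit logs (MODERATE).
        List<SecurityCategory> accessedData =
                List.of(SecurityCategory.LOW, SecurityCategory.MODERATE);
        System.out.println("Application integrity category: "
                + SecurityCategory.highestOf(accessedData)); // MODERATE
    }
}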
This initial assessment provides guidance, but there might be additional factors
to consider.
Categorize cloud resources based on the data and applications they host
Any data or application that you deploy on Google Cloud is hosted by a
Google Cloud resource:
An application might be hosted by an App Engine service, a VM instance, or a
GKE cluster.
Your data might be hosted by a persistent disk, a Cloud Storage bucket, or a
BigQuery dataset.
When a cloud resource hosts sensitive data or applications, the resource and the
deployment pipeline that manages the resource can also become sensitive. For
example, you should consider a Cloud Run service and its deployment pipeline
to be as sensitive as the application that it's hosting.
After categorizing your data and your applications, create an initial security
category for the application. To do so, determine a level from the following
categories:
Confidentiality: Highest category of any data or application hosted
Integrity: Highest category of any data or application hosted
Availability: Highest category of any data or application hosted
When making your initial assessment, don't be too strict—for example:
If you encrypt highly confidential data, treat the encryption key as highly
confidential. But, you can use a lower security category for the resource
containing the data.
If you store redundant copies of data, or run redundant instances of the same
applications across multiple resources, you can make the category of the
resource lower than the category of the data or application it hosts.
Limit the number of sources for input artifacts, particularly third-party sources
Maintain a cache of input artifacts that deployment pipelines can use if source
systems are unavailable