
Software Engineering


Unit – 1 Introduction to Software and Software Engineering
The Evolving Role of Software; Software: A Crisis on the Horizon and Software Myths; Software Engineering: A Layered Technology; Software Process Models; The Linear Sequential Model; The Prototyping Model; The RAD Model; Evolutionary Process Models; Agile Process Model; Component-Based Development; Process, Product and Process; Agility and Agile Process Model; Extreme Programming; Other Process Models of Agile Development and Tools.

• Software Engineering Tutorial


Software Engineering Tutorial delivers basic and advanced concepts of Software Engineering. It is designed to help both beginners and professionals.

Software Engineering provides a standard procedure to design and develop software.

Our Software Engineering Tutorial contains all the topics of Software Engineering, such as Software Engineering Models, Software Development Life Cycle, Requirement Engineering, Software Design tools, Software Design Strategies, Software Design levels, Software Project Management, Software Management activities, Software Management Tools, Software Testing levels, Software Testing approaches, Quality Assurance vs. Quality Control, Manual Testing, Software Maintenance, Software Re-engineering, and Software Development Tools such as CASE tools.

What is Software Engineering?

The term software engineering is the product of two words: software and engineering.

Software is a collection of integrated programs.

Software consists of carefully organized instructions and code written by developers in any of various programming languages.

It also includes computer programs and related documentation such as requirements, design models, and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve
frameworks, processes, etc.

Software Engineering is an engineering branch concerned with developing software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.

Why is Software Engineering required?

Software Engineering is required due to the following reasons:

o To manage large software

o For more scalability

o For cost management

o To manage the dynamic nature of software

o For better quality management

Need of Software Engineering

The need for software engineering arises from the high rate of change in user requirements and in the environment in which software operates.

o Huge programming: It is easier to build a wall than a house or building; likewise, as the size of software becomes large, engineering has to step in to give it a scientific process.

o Scalability: If the software process were not based on scientific and engineering concepts, it would be easier to re-create new software than to scale an existing one.

o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has driven down the cost of computer and electronic hardware. But the cost of software remains high if the proper process is not adopted.

o Dynamic nature: The continually growing and adapting nature of software depends heavily on the environment in which the user works. If the nature of the software keeps changing, new upgrades must be made to the existing version.

o Quality management: A better process of software development provides a better-quality software product.

Characteristics of a good software engineer

The features that good software engineers should possess are as follows:

o Exposure to systematic methods, i.e., familiarity with software engineering principles.


o Good technical knowledge of the project range (Domain knowledge).
o Good programming abilities.
o Good communication skills. These skills comprise oral, written, and interpersonal skills.
o High motivation.
o Sound knowledge of fundamentals of computer science.
o Intelligence.
o Ability to work in a team
o Discipline, etc.

Importance of Software Engineering


The importance of Software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering reduces the complexity of a project by dividing big problems into various small issues, which are then solved one by one, each independently of the others.

2. To minimize software cost: Software needs a lot of hard work, and software engineers are highly paid experts. A lot of manpower is required to develop software with a large amount of code. But in software engineering, programmers plan everything and cut out whatever is not needed. In turn, the cost of software production becomes less than that of software developed without software engineering methods.

3. To decrease time: Anything built without a plan wastes time. Making great software may require running and reworking a great deal of code to arrive at the definitive running version, which is a very time-consuming procedure if not well handled. Building software according to software engineering methods decreases development time considerably.

4. Handling big projects: Big projects are not completed in a couple of days; they need lots of patience, planning, and management. Investing six or seven months of a company's effort requires a great deal of planning, direction, testing, and maintenance. No one can spend four months of a company's resources on a task and still have the project in its first stage: the company has committed many resources to the plan, and it should be completed. So to handle a big project without problems, the company has to use a software engineering method.

5. Reliable software: Software should be dependable: once delivered, it should work for at least its stated lifetime or subscription period, and if any bugs appear, the company is responsible for fixing them. Because software engineering includes testing and maintenance, there is no worry about reliability.

6. Effectiveness: Effectiveness comes from building according to standards. Meeting software standards is a major target of companies, so software becomes more effective with the help of software engineering.

• Software Evolution – Software Engineering


Software Evolution is a term that refers to the process of developing software initially and then updating it over time for various reasons, e.g., to add new features or to remove obsolete functionality. This article focuses on discussing Software Evolution in detail.
What is Software Evolution?

The software evolution process includes fundamental activities of change analysis, release planning, system
implementation, and releasing a system to customers.

1. The cost and impact of proposed changes are assessed to see how much of the system is affected by each change and how much it might cost to implement.

2. If the proposed changes are accepted, a new release of the software system is planned.

3. During release planning, all the proposed changes (fault repair, adaptation, and new functionality) are
considered.

4. A decision is then made on which changes to implement in the next version of the system.

5. The process of change implementation is an iteration of the development process where the revisions to the
system are designed, implemented, and tested.

Necessity of Software Evolution

Software evolution is necessary for the following reasons:

1. Change in requirements with time: With time, an organization's needs and modus operandi may change substantially, so the tools (software) it uses must change accordingly to maximize performance.

2. Environment change: As the working environment changes, the tools that enable us to work in that environment must change proportionally. The same happens in the software world: when the working environment changes, organizations need their old software reintroduced with updated features and functionality to suit the new environment.

3. Errors and bugs: As deployed software ages, its preciseness declines, and its efficiency in bearing an increasingly complex workload continually degrades. In that case, it becomes necessary to avoid using obsolete and aged software. All such obsolete pieces of software need to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.

4. Security risks: Using outdated software within an organization may put you on the verge of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. So it becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software is not robust enough to withstand currently occurring cyberattacks, it must be changed (updated).

5. For new functionality and features: In order to increase performance, speed up data processing, and add other functionality, an organization needs to continuously evolve its software throughout its life cycle so that the product's stakeholders and clients can work efficiently.

Laws used for Software Evolution

1. Law of Continuing Change

This law states that any software system that represents some real-world reality must undergo continual change or become progressively less useful in that environment.

2. Law of Increasing Complexity

As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this
phenomenon.

3. Law of Conservation of Organizational Stability

Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the resources devoted to system development.

4. Law of Conservation of Familiarity

This law states that during the active lifetime of a program, the changes made in successive releases are almost constant.

• Software: A Crisis on the Horizon and Software Myths


Software Crisis – Software Engineering

The term “software crisis” refers to the numerous challenges and difficulties faced by the software industry during
the 1960s and 1970s. It became clear that old methods of developing software couldn’t keep up with the growing
complexity and demands of new projects. This led to high costs, delays, and poor-quality software. New
methodologies and tools were needed to address these issues.

What is Software Crisis?

Software Crisis is a term used in computer science for the difficulty of writing useful and efficient computer programs in the required time. The software crisis arose because the same workforce, the same methods, and the same tools were used even as software demand, software complexity, and software challenges increased rapidly. With the increase in software complexity, many software problems arose because existing methods were insufficient.
If the same workforce, methods, and tools are used despite a fast increase in software demand, complexity, and challenges, issues arise such as software budget problems, efficiency problems, quality problems, and management and delivery problems. This condition is called a Software Crisis.


Causes of Software Crisis

Following are the causes of Software Crisis:

• The cost of owning and maintaining software was as expensive as developing the software.

• Projects were running over time.

• Software was very inefficient.

• The quality of the software was low.

• Software often did not meet user requirements.

• The average software project overshot its schedule by half.

• Software was sometimes never delivered.

• Resource utilization was non-optimal.

• Software was challenging to alter, debug, and enhance.

• Complex software was harder to change.

Factors Contributing to Software Crisis

Factors contributing to the Software Crisis are:

• Poor project management.

• Lack of adequate training in software engineering.

• Less skilled project members.

• Low productivity improvements.

Solution of Software Crisis:


There is no single solution to the crisis. One possible solution to a software crisis is Software Engineering because
software engineering is a systematic, disciplined, and quantifiable approach. For preventing software crises, there are
some guidelines:

• Reduce software budget overruns.

• The quality of the software must be high.

• Less time should be needed for a software project.

• Experienced and skilled people should work on the software project.

• Software must be delivered.

• Software must meet user requirements.

Questions for Practice:

1. Many causes of the software crisis can be traced to mythology based on [UGC NET 2011]

(A) Management Myths

(B) Customer Myths

(C) Practitioner Myths

(D) All of the Above

Solution: Correct Answer is (D).

Conclusion

Software crisis refers to the challenges faced in developing efficient and useful computer programs due to increasing
complexity and demands. Factors like poor project management, inadequate training, and low productivity
contribute to this crisis. Addressing these issues through systematic approaches like software engineering, with a
focus on budget control, quality, timeliness, and skilled workforce, can mitigate the impact of the crisis.

Frequently Asked Questions related to Software Crisis

What are the main causes of the Software Crisis?

The main causes of the Software Crisis are low-quality software and software that does not meet user requirements.

Give a real example of a Software Crisis.

One of the famous software failures in computer science is Therac-25, a machine used to deliver radiation therapy to cancer patients; faults in its software caused radiation overdoses.

What are the impacts of the Software Crisis?

The impact of the Software Crisis is that it affects the development of new software and also creates problems in the
maintenance of older software.

Brief description about Software Myths

Software Myths:

Most experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.
Software Myths in Software Engineering

Software Myths are beliefs that have no solid evidence behind them. Software myths may lead to many misunderstandings, unrealistic expectations, and poor decision-making in software development projects. Some common software myths include:

o The Myth of Perfect Software: Assuming that it's possible to create bug-free software. In Reality, software is
inherently complex, and it's challenging to eliminate all defects.

o The Myth of Short Development Times: Assuming that software can be developed quickly without proper planning, design, and testing. In reality, rushing the development process leads to lower-quality software and missed deadlines.

o The Myth of Linear Progression: Assuming that software development proceeds in a linear, predictable manner. In reality, development is often iterative and can involve unexpected setbacks and changes.

o The Myth of No Maintenance: It is thought that software development is complete once the initial version is
released. But in reality, the software requires maintenance and updates to remain functional and secure.

o The Myth of User-Developer Mind Reading: It is assumed that developers can understand user needs without clear and ongoing communication with users. But in reality, user feedback and collaboration are essential for building the right software.

o The Myth of Cost Predictability: It is thought that the cost of the software can be easily predicted, but in reality many factors can influence project costs, estimates are often subject to change, and many hidden costs exist.

o The Myth of Endless Features: It is believed that adding more features to software will make it better. But in
reality, adding more features to the software can make it complex and harder to use and maintain. It may
often lead to a worse user experience.

o The Myth of No Testing Needed: It is assumed that there is no need to test the software if the coder is skilled
or the code looks good. But in reality, thorough testing is essential to catch hidden defects and ensure
software reliability.

o The Myth of One-Size-Fits-All Methodologies: Thinking that a single software development methodology is suitable for all projects. But in reality, the methodology should be chosen to fit the specific project.

o The Myth of "We'll Fix It Later": It is assumed that a bug can be fixed at a later stage. But in reality, as the
code gets longer and bigger, it takes a lot of work to find and fix the bug. These issues can lead to increased
costs and project delays.
o The Myth of All Developers Are Interchangeable: It is believed that any developer can replace another
without any impact. But in reality, each developer has unique skills and knowledge that can significantly
affect the project. Each one has a different method to code, find, and fix the bugs.

o The Myth of No User Training Required: It is assumed that users will understand and use new software without any training. But in reality, users need training and documentation to use new software effectively.

o More Developers Equal Faster Development: It is believed that a large number of developers means software development will take less time and the quality will be higher. But in reality, larger teams create communication overhead and do not always result in faster development.

o Zero-Risk Software: It is assumed that it's possible to develop software with absolutely no risks. But in reality,
all software projects involve some level of risk, and risk management is a critical part of software
development.

Understanding and addressing these software myths is important for successful software development projects. It
helps in setting realistic expectations, improving communication, and making more informed decisions throughout
the development process.

Disadvantages of Software Myths

Software myths in software engineering can have several significant disadvantages and negative consequences, as
they can lead to unrealistic expectations, poor decision-making, and a lack of alignment between stakeholders. Here
are some of the disadvantages of software myths in software engineering:

o Unrealistic Expectations: Software myths can create disappointment and frustration among stakeholders and developers. Sometimes a false myth may even lead to software going unused when it is completely safe.

o Project Delays: Software myths can lead to more delays in the completion of projects and increase projects' completion time.

o Poor Quality Software: Myths such as "we can fix it later" or "we don't need extensive testing" can lead to
poor software quality. Neglecting testing and quality assurance can result in buggy and unreliable software.

o Scope Creep: Myths like "fixed requirements" can lead to scope creep as stakeholders may change their
requirements or expectations throughout the project. This can result in a never-ending development cycle.

o Ineffective Communication: Believing in myths can affect good communication within development teams
and between teams and clients. Clear and open communication is crucial for project success, and myths can
lead to misunderstandings and misalignment.

o Wasted Resources: The Idea of getting "the perfect software" can result in the allocation of unnecessary
resources, both in terms of time and money, which could be better spent elsewhere.

o Customer Dissatisfaction: Unrealistic promises made based on myths can lead to customer dissatisfaction.
When software doesn't meet exaggerated expectations, clients may be disappointed and dissatisfied.

o Reduced Productivity: Myths can lead to reduced productivity, as team members may spend time on
unnecessary tasks or follow counterproductive processes based on these myths.

o Increased Risk of Project Failure: The reliance on myths can significantly increase the risk of project failure.
Failure to address these myths can lead to project cancellations, loss of investments, and negative impacts on
an organization's reputation.
o Decreased Competitiveness: Belief in myths can make an organization less competitive in the market. It can
hinder an organization's ability to innovate and adapt.

• Software Engineering: A Layered Technology

Layered Technology in Software Engineering


Understanding Layered Technology
Layered technology is an architectural pattern that divides a software system into distinct logical layers. It is sometimes referred to as layered architecture or layered design. Every layer is in charge of a certain aspect of the program's functionality, and it mostly communicates with the layers just above and below it. This division of responsibilities encourages modularity, which improves extensibility and maintainability.
The Layers
1. Presentation Layer:
This is the topmost layer, interacting directly with the user. Its duties include presenting information to the user, handling user input, and rendering the user interface elements. In web applications this layer comprises client-side JavaScript, HTML, and CSS.
2. Application Layer:
The application's main functionality is contained in this layer, which is sometimes referred to as the business logic layer. It carries out calculations, enforces business rules, processes and manages data, and coordinates the interactions of many parts.
3. Domain Layer:
This layer encapsulates the business logic and rules specific to the domain of the application. It defines the objects, entities, and their relationships, often represented using models or classes. The domain layer is independent of any specific implementation or technology.
4. Infrastructure Layer:
Low-level issues, including database access, external service interaction, and system-to-system communication, are handled by the infrastructure layer. It offers the support required for the higher layers to operate efficiently. A minimal sketch of these four layers working together appears after this list.
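To make this division of responsibilities concrete, here is a minimal, hypothetical sketch in Python. The class and method names are illustrative only, not from any particular framework: the presentation layer talks only to the application layer, which coordinates the domain and infrastructure layers.

```python
# Hypothetical four-layer sketch; all names are illustrative.

# Infrastructure layer: low-level persistence concerns.
class InMemoryUserRepository:
    def __init__(self):
        self._users = {}

    def save(self, user):
        self._users[user.username] = user

    def find(self, username):
        return self._users.get(username)


# Domain layer: business entities and rules, independent of storage and UI.
class User:
    def __init__(self, username):
        if not username:
            raise ValueError("username must not be empty")  # domain rule
        self.username = username


# Application layer: coordinates domain objects and infrastructure.
class RegistrationService:
    def __init__(self, repository):
        self._repository = repository

    def register(self, username):
        if self._repository.find(username):
            raise ValueError("username already taken")  # business rule
        user = User(username)
        self._repository.save(user)
        return user


# Presentation layer: handles input and output, delegating downward.
def register_command(service, username):
    try:
        user = service.register(username)
        print(f"Registered {user.username}")
    except ValueError as err:
        print(f"Error: {err}")


if __name__ == "__main__":
    service = RegistrationService(InMemoryUserRepository())
    register_command(service, "alice")  # Registered alice
    register_command(service, "alice")  # Error: username already taken
```

Note how each layer communicates only with the layer directly below it, which is what makes the layers independently replaceable and testable.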

Benefits of Layered Technology


1. Modularity and Separation of Concerns
A system can be divided into layers, with each layer concentrating on a certain functional area. Because of this division, developers are able to work on different aspects of the program individually, which facilitates understanding, maintenance, and extension.
2. Scalability
Layered architectures facilitate scalability. Individual layers can be scaled independently based on the application's needs. For example, in a web application, if there is a sudden surge in user interactions, the presentation layer can be scaled independently of the backend logic.
3. Reusability
It is common practice to reuse layers between projects or between modules within a project. For example, several client apps or user interfaces can use the same business logic in a well-defined application layer.
4. Interoperability
The clear separation of layers enables easier integration with external systems or services. For instance, the infrastructure layer can be designed to interact with various types of databases or APIs.
5. Testability
Layered architectures promote effective testing. Each layer can be tested independently, allowing unit testing, integration testing, and system testing to be performed with precision.
6. Maintainability
Updates or modifications are frequently restricted to a single layer. This simplifies maintenance and lowers the possibility of unforeseen side effects in other system components.
Real-World Examples
1. Model-View-Controller (MVC):
MVC is among the most popular applications of layered architecture. The model represents the domain layer, the view represents the presentation layer, and the controller represents the application layer. A small sketch follows this list.
2. Microservices:
In a microservices architecture, each microservice can be seen as a self-contained layer responsible for a specific aspect of the application's functionality. These services interact through well-defined APIs.
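Here is a minimal, hypothetical MVC sketch in Python to illustrate the mapping just described; the class names and the task-list domain are invented for the example.

```python
# Hypothetical MVC sketch; class names and domain are illustrative.

class TaskModel:  # Model: domain data and rules
    def __init__(self):
        self._tasks = []

    def add(self, title):
        if not title:
            raise ValueError("task title must not be empty")
        self._tasks.append(title)

    def all(self):
        return list(self._tasks)


class TaskView:  # View: presentation only
    def render(self, tasks):
        for i, title in enumerate(tasks, start=1):
            print(f"{i}. {title}")


class TaskController:  # Controller: coordinates model and view
    def __init__(self, model, view):
        self._model = model
        self._view = view

    def add_task(self, title):
        self._model.add(title)                 # update the domain
        self._view.render(self._model.all())  # refresh the presentation


if __name__ == "__main__":
    controller = TaskController(TaskModel(), TaskView())
    controller.add_task("Write unit tests")
    controller.add_task("Review design")
```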

Implementing Layered Technology in Software Development


1. Choosing the Right Layers
It is critical to choose the right layers for a given project. Although presentation, application, domain, and infrastructure are the fundamental layers, the precise arrangement may differ based on the type of application. A user-interface-centric application might prioritize the presentation layer, whereas a data-intensive application might need a more elaborate data access layer.
2. Communication between Layers
A coherent application requires effective inter-layer communication, usually through well-defined interfaces or APIs. Every layer makes available a collection of services or features that the layers above it can use. This decouples the layers, enabling them to change or evolve separately from the system as a whole.
3. Managing Dependencies
It is essential to give careful thought to inter-layer dependencies in order to prevent tight coupling. A layer should depend only on the layers directly beneath it. This lowers the possibility of unexpected consequences when applying updates or modifications. Effective dependency management often involves Dependency Injection (DI) frameworks and Inversion of Control (IoC) containers, as the sketch below illustrates.
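A minimal sketch of constructor-based dependency injection in plain Python, with no framework; the names are invented for the example. The application-layer service receives its infrastructure dependency instead of constructing it, so the dependency can be swapped, for example with a test double.

```python
# Hypothetical constructor-injection sketch; no DI framework required.

class SqlOrderStore:  # infrastructure implementation
    def load(self, order_id):
        # Real database access would go here; a fixed record stands in.
        return {"id": order_id, "total": 100.0}


class FakeOrderStore:  # test double exposing the same interface
    def load(self, order_id):
        return {"id": order_id, "total": 1.0}


class BillingService:  # application layer
    def __init__(self, store):  # the dependency is injected, not built here
        self._store = store

    def total_with_tax(self, order_id, rate=0.2):
        order = self._store.load(order_id)
        return order["total"] * (1 + rate)


# Production wiring and test wiring differ only in what is injected.
print(BillingService(SqlOrderStore()).total_with_tax(42))   # 120.0
print(BillingService(FakeOrderStore()).total_with_tax(42))  # 1.2
```

An IoC container automates exactly this kind of wiring; the principle is unchanged.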

• Software Process Models


Software Processes

The term software refers to the set of computer programs, procedures, and associated documents (flowcharts, manuals, etc.) that describe the programs and how they are to be used.

A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes. These activities are:

1. Software specifications: The functionality of the software and constraints on its operation must be defined.

2. Software development: The software to meet the requirement must be produced.

3. Software validation: The software must be validated to ensure that it does what the customer wants.

4. Software evolution: The software must evolve to meet changing client needs.

The Software Process Model

A software process model is a specified definition of a software process, which is presented from a particular
perspective. Models, by their nature, are a simplification, so a software process model is an abstraction of the actual
process, which is being described. Process models may contain activities, which are part of the software process,
software product, and the roles of people involved in software engineering. Some examples of the types of software
process models that may be produced are:

1. A workflow model: This shows the series of activities in the process along with their inputs, outputs, and dependencies. The activities in this model represent human actions.

2. A dataflow or activity model: This represents the process as a set of activities, each of which carries out some data transformation. It shows how the input to the process, such as a specification, is converted to an output, such as a design. The activities here may be at a lower level than activities in a workflow model. They may represent transformations carried out by people or by computers.

3. A role/action model: This represents the roles of the people involved in the software process and the activities for which they are responsible.

There are several general models or paradigms of software development:

1. The waterfall approach: This takes the above activities and produces them as separate process phases such
as requirements specification, software design, implementation, testing, and so on. After each stage is
defined, it is "signed off" and development goes onto the following stage.

2. Evolutionary development: This method interleaves the activities of specification, development, and
validation. An initial system is rapidly developed from a very abstract specification.

3. Formal transformation: This method is based on producing a formal mathematical system specification and transforming this specification, using mathematical methods, into a program. These transformations are 'correctness preserving,' which means you can be sure that the developed program meets its specification.

4. System assembly from reusable components: This method assumes that parts of the system already exist. The system development process focuses on integrating these parts rather than developing them from scratch.

Software Crisis

1. Size: Software is becoming larger and more complex as the expectations placed on it grow. For example, the amount of code in consumer products is doubling every couple of years.

2. Quality: Many software products have poor quality, i.e., they exhibit defects after being put into use, owing to ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.

3. Cost: Software development is costly, both in the time taken to develop it and in the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.

4. Delayed Delivery: Serious schedule overruns are common. Very often the software takes longer than the estimated time to develop, which in turn causes the cost to shoot up. For example, one in four large-scale development projects is never completed.

Program vs. Software

Software is more than programs. Any program is a subset of software, and it becomes software only if
documentation & operating procedures manuals are prepared.

There are three components of the software:


1. Program: Program is a combination of source code & object code.

2. Documentation: Documentation consists of different types of manuals. Examples of documentation manuals are:
Data Flow Diagram, Flow Charts, ER diagrams, etc.

3. Operating Procedures: Operating procedures consist of instructions to set up and use the software system and instructions on how to react to system failure. Examples of operating procedure manuals are: installation guide, beginner's guide, reference guide, system administration guide, etc.

• The Linear Sequential Model


The Linear Sequential Model-
It is also called the classic life cycle, the waterfall model, or the software development life cycle, and it has several stages to perform. This model suggests a sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. A linear process flow executes each of the five framework activities:
1) Information/System Engineering and Modeling-
2) Design
3) Code generation
4) Testing
5) Support
1. Information/System Engineering and Modeling-
Because software is part of a larger system or business, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This modeling is essential for the interactions among system elements. System engineering and analysis encompass requirements gathering at the system level, together with a small amount of top-level design and analysis.
2. Design-
Software design is actually a multi-step process that focuses on various attributes of a program, such as data structure, software architecture, interface representations, and procedural detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins.
3. Code generation-
The design must be translated into a machine-readable form; the code generation step performs this task. If the design has been performed in a detailed manner, code generation can be accomplished mechanistically, and the generated code is expected to be accurate.
4. Testing
After code is generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been exercised, and on the externals, conducting tests to uncover errors and to ensure that defined input produces actual results that agree with required results. In computer programming, unit testing is a method by which individual units of source code (sets of one or more program modules together with associated control data and usage procedures) are tested to determine whether they are fit for use; a minimal sketch appears after this list.
5. Support-
Software will undergo change after delivery to the customer. Change occurs because errors are encountered, because the external environment changes (for example, a change in the operating system), or because a peripheral device changes.
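As a minimal illustration of the unit testing mentioned above, here is a hypothetical test written with Python's built-in unittest module; the function under test is invented for the example.

```python
import unittest


def discounted_price(price, discount_percent):
    """Function under test: apply a percentage discount to a price."""
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - discount_percent / 100), 2)


class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # Defined input must produce results that agree with required results.
        self.assertEqual(discounted_price(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discounted_price(99.99, 0), 99.99)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```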

• The Prototyping Model


The prototype model requires that before carrying out the development of the actual software, a working prototype of the system should be built. A prototype is a toy implementation of the system. A prototype usually turns out to be a very crude version of the actual system, possibly exhibiting limited functional capabilities, low reliability, and inefficient performance compared to the actual software. In many instances, the client has only a general view of what is expected from the software product. In such a scenario, where detailed information about the input to the system, the processing needs, and the output requirements is absent, the prototyping model may be employed.
Steps of Prototype Model

1. Requirement Gathering and Analysis

2. Quick Design

3. Build a Prototype

4. Assessment or User Evaluation

5. Prototype Refinement

6. Engineer Product

Advantage of Prototype Model

1. Reduces the risk of incorrect user requirements

2. Good where requirements are changing or uncommitted

3. Regularly visible progress aids management

4. Supports early product marketing

5. Reduces maintenance cost

6. Errors can be detected much earlier, as the system is built side by side with its evaluation

Disadvantage of Prototype Model

1. An unstable or badly implemented prototype often becomes the final product.

2. Requires extensive customer collaboration:

o Costs the customer money

o Needs a committed customer

o Difficult to finish if the customer withdraws

o May be too customer-specific, with no broad market

3. Difficult to know how long the project will last.

4. Easy to fall back into the code and fix without proper requirement analysis, design, customer evaluation, and
feedback.

5. Prototyping tools are expensive.

6. Special tools & techniques are required to build a prototype.

7. It is a time-consuming process.

• The RAD Model


RAD (Rapid Application Development) Model

RAD is a linear sequential software development process model that emphasizes an extremely short development cycle using a component-based construction approach. If requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.

RAD (Rapid Application Development) is a concept holding that products can be developed faster and with higher quality through:

o Gathering requirements using workshops or focus groups

o Prototyping and early, reiterative user testing of designs

o The re-use of software components

o A rigidly paced schedule that defers design improvements to the next product version

o Less formality in reviews and other team communication


The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is defined by answering questions such as: what data drives the business process, what data is generated, who generates it, where does the information go, and who processes it.

2. Data Modelling: The data collected from business modeling is refined into a set of data objects (entities) that are needed to support the business. The attributes (characteristics of each entity) are identified, and the relationships between these data objects (entities) are defined. A small illustrative data model appears after this list of phases.


3. Process Modelling: The information objects defined in the data modeling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

4. Application Generation: Automated tools are used to facilitate construction of the software, often employing fourth-generation (4GL) techniques.

5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new parts must be tested, and all interfaces must be fully exercised.
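As a small illustration of the data modeling phase, here is a hypothetical sketch in Python using dataclasses; the entities, attributes, and relationship are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Customer:           # entity
    customer_id: int      # attribute
    name: str             # attribute


@dataclass
class Order:              # entity
    order_id: int
    amount: float
    customer: Customer    # relationship: each Order belongs to a Customer


@dataclass
class OrderBook:
    orders: List[Order] = field(default_factory=list)

    def total_for(self, customer: Customer) -> float:
        # Processing description: retrieve and aggregate a customer's orders.
        return sum(o.amount for o in self.orders if o.customer == customer)


alice = Customer(1, "Alice")
book = OrderBook([Order(10, 250.0, alice), Order(11, 99.5, alice)])
print(book.total_for(alice))  # 349.5
```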

When to use RAD Model?

o When the system needs to be created as a modularized project within a short time span (2-3 months).

o When the requirements are well known.

o When the technical risk is limited.

o When the budget allows the use of automated code-generating tools.

Advantage of RAD Model

o This model is flexible in the face of change.

o In this model, changes are easily accommodated.

o Each phase in RAD delivers the highest-priority functionality to the customer.

o It reduces development time.

o It increases the reusability of features.

Disadvantage of RAD Model

o It requires highly skilled designers.

o Not every application is compatible with RAD.

o The RAD model cannot be used for smaller projects.

o It is not suitable where technical risk is high.

o It requires user involvement.

• Evolutionary Process Models


What are Evolutionary Process Models?
The evolutionary model is a combination of the iterative and incremental models of the software development life cycle. In this article, we are going to understand the different types of evolutionary process models with the help of examples.

Software Process Model

A software process model is a structured representation of the activities of the software development process. During the development of software, various steps that are important for the successful development of the project are taken; when these steps are structured in the proper order in a model, it is called a software process model. A software process model includes activities such as planning, designing, implementation, defining tasks, setting up milestones, and assigning roles and responsibilities.

Evolutionary Process Model

The evolutionary model is based on the concept of making an initial product and then evolving the software product over time with iterative and incremental approaches and proper feedback. In this type of model, the product goes through several iterations until the final product is built. Development is carried out simultaneously with feedback gathered during development. This model has a number of advantages, such as customer involvement, taking feedback from the customer during development, and building the exact product that the user wants. Because of the multiple iterations, the chances of errors are reduced, and reliability and efficiency increase.


Types of Evolutionary Process Models

1. Iterative Model

In the iterative model, we first take the initial requirements and then enhance the product over multiple iterations until the final product is ready. In every iteration, some design modifications are made and some changes to functional requirements are added. The main idea behind this approach is to build the final product through multiple iterations, so that the result is almost exactly what the user wants, with fewer errors and high performance and quality.

2. Incremental Model

In the incremental model, we first build the project with basic features and then evolve the project in every iteration; it is mainly used for large projects. The first step is to gather requirements, then perform analysis, design, coding, and testing, and this process repeats until the final product is ready.

3. Spiral Model
The spiral model is a combination of the waterfall and iterative models. It focuses on risk handling while developing the project with an incremental and iterative approach, producing output quickly, and it is good for big projects. The software is created through multiple iterations using a spiral approach. The final product emerges through successive development, and because customer interaction is present throughout, the chances of error are reduced.

Advantages of the Evolutionary Process Model

1. During the development phase, the customer gives feedback regularly, so the customer's requirements get clearly specified.

2. After every iteration, risk is analyzed.

3. Suitable for big, complex projects.

4. The first build is delivered quickly, as the model uses an iterative and incremental approach.

5. Enhanced Flexibility: The iterative nature of the model allows for continuous changes and refinements to be
made, accommodating changing requirements effectively.

6. Risk Reduction: The model’s emphasis on risk analysis during each iteration helps in identifying and
mitigating potential issues early in the development process.

7. Adaptable to Changes: Since changes can be incorporated at the beginning of each iteration, it is well-suited
for projects with evolving or uncertain requirements.

8. Customer Collaboration: Regular customer feedback throughout the development process ensures that the
end product aligns more closely with the customer’s needs and expectations.

Disadvantages of the Evolutionary Process Model

1. It is not suitable for small projects.

2. The complexity of the spiral model can be more than the other sequential models.

3. The cost of developing a product through a spiral model is high.

4. Project Management Complexity: The iterative nature of the model can make project management and tracking more complex compared to linear models.

5. Resource Intensive: The need for continuous iteration and customer feedback demands a higher level of
resources, including time, personnel, and tools.

6. Documentation Challenges: Frequent changes and iterations can lead to challenges in maintaining accurate
and up-to-date documentation.

7. Potential Scope Creep: The flexibility to accommodate changes can sometimes lead to an uncontrolled
expansion of project scope, resulting in scope creep.

8. Initial Planning Overhead: The model’s complexity requires a well-defined initial plan, and any deviations or
adjustments can be time-consuming and costly.

• Agile Process Model


The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts from one to four
weeks. The division of the entire project into smaller parts helps to minimize the project risk and to reduce the
overall project delivery time requirements. Each iteration involves a team working through a full software
development life cycle including planning, requirements analysis, design, coding, and testing before a working
product is demonstrated to the client.

Phases of Agile Model:

The phases in the Agile model are as follows:

1. Requirements gathering

2. Design the requirements

3. Construction/ iteration

4. Testing/ Quality assurance

5. Deployment

6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should explain business
opportunities and plan the time and effort needed to build the project. Based on this information, you can evaluate
technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders to define requirements.
You can use the user flow diagram or the high-level UML diagram to show the work of new features and show how it
will apply to your existing system.

3. Construction/ iteration: When the team has defined the requirements, the work begins. Designers and developers start working on the project, which aims to deploy a working product. The product will undergo various stages of improvement, so it starts with simple, minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team receives feedback about the
product and works through the feedback.

Agile Testing Methods:

o Scrum
o Crystal

o Dynamic Software Development Method (DSDM)

o Feature Driven Development (FDD)

o Lean Software Development

o eXtreme Programming (XP)

Advantages (Pros) of the Agile Method:

1. Frequent Delivery

2. Face-to-Face Communication with clients.

3. Efficient design and fulfils the business requirement.

4. Anytime changes are acceptable.

5. It reduces total development time.

Disadvantages (Cons) of the Agile Model:

1. Due to the shortage of formal documents, it creates confusion and crucial decisions taken throughout various
phases can be misinterpreted at any time by different team members.

2. Due to the lack of proper documentation, once the project is complete and the developers are allotted to another project, maintenance of the finished product can become difficult.

• Component - Based Development


Component-Based Software Engineering (CBSE) is a process that focuses on the design and development of
computer-based systems with the use of reusable software components.

It not only identifies candidate components but also qualifies each component’s interface, adapts components to
remove architectural mismatches, assembles components into a selected architectural style, and updates
components as requirements for the system change.

The process model for component-based software engineering occurs concurrently with component-based
development.

Component-based development:

Component-based development (CBD) is a CBSE activity that occurs in parallel with domain engineering. Using
analysis and architectural design methods, the software team refines an architectural style that is appropriate for
the analysis model created for the application to be built.

CBSE Framework Activities:

Framework activities of Component-Based Software Engineering are as follows:

1. Component Qualification: This activity ensures that the system architecture defines the requirements components must satisfy to become reusable components. Reusable components are generally identified through the traits of their interfaces: "the services that are provided, and the means by which consumers access these services" are defined as part of the component interface.

2. Component Adaptation: This activity ensures that the architecture defines the design conditions for all components and identifies their modes of connection. In some cases, existing reusable components may not be usable because of the architecture's design rules and conditions. Such components must be adapted to meet the requirements of the architecture, or be discarded and replaced by other, more suitable components.

3. Component Composition: This activity ensures that the Architectural style of the system integrates the
software components and forms a working system. By identifying the connection and coordination
mechanisms of the system, the architecture describes the composition of the end product.

4. Component Update: This activity ensures that reusable components are kept up to date. Updates are sometimes complicated by third-party involvement (the organization that developed the reusable component may be outside the immediate control of the software engineering organization currently using the component). A small sketch of a qualified interface and an adapter appears below.
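To ground qualification and adaptation, here is a minimal, hypothetical Python sketch: an interface the architecture expects, a reusable component whose interface mismatches it, and an adapter that removes the mismatch. All names are invented for the example.

```python
from abc import ABC, abstractmethod


class Logger(ABC):
    """Component qualification: the interface defines the services offered
    and the means by which consumers access them."""

    @abstractmethod
    def log(self, message: str) -> None:
        ...


class ThirdPartyLogger:
    """Reusable component with a mismatched interface."""

    def write_entry(self, severity: str, text: str) -> None:
        print(f"[{severity}] {text}")


class ThirdPartyLoggerAdapter(Logger):
    """Component adaptation: wrap the component so it satisfies the
    architecture's design rules instead of being discarded."""

    def __init__(self, wrapped: ThirdPartyLogger):
        self._wrapped = wrapped

    def log(self, message: str) -> None:
        self._wrapped.write_entry("INFO", message)


# Component composition: the system is assembled against the interface.
def run(logger: Logger) -> None:
    logger.log("system assembled from reusable parts")


run(ThirdPartyLoggerAdapter(ThirdPartyLogger()))
```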


• Extreme Programming
What is Extreme Programming (XP)?

Extreme Programming (XP) is an Agile software development methodology that focuses on delivering high-quality
software through frequent and continuous feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.

Agile development approaches evolved in the 1990s as a reaction to documentation and bureaucracy-based
processes, particularly the waterfall approach. Agile approaches are based on some common principles, some of
which are:

1. Working software is the key measure of progress in a project.

2. Therefore, for progress in a project, software should be developed and delivered rapidly in small increments.

3. Even late changes in the requirements should be entertained.

4. Face-to-face communication is preferred over documentation.

5. Continuous feedback and involvement of customers are necessary for developing good-quality software.
6. A simple design that evolves and improves with time is a better approach than doing an elaborate up-front design to handle all possible scenarios.

7. The delivery dates are decided by empowered teams of talented individuals.

Extreme programming is one of the most popular and well-known approaches in the family of agile methods. An XP project starts with user stories, which are short descriptions of the scenarios the customers and users would like the system to support. Each story is written on a separate card, so stories can be flexibly grouped.

Good Practices in Extreme Programming

Some of the good practices that have been recognized in the extreme programming model and suggested to maximize their use are given below (the list reflects the commonly cited XP practices):

o Test-driven development: unit tests are written before the code they exercise, and all tests must keep passing.

o Pair programming: two programmers work together at one workstation, reviewing code as it is written.

o Continuous integration: the system is integrated and built many times a day, so integration errors surface early.

o Refactoring: the internal structure of the code is continuously improved without changing its behavior.

o Simple design: the simplest design that works for the current stories is built, rather than an elaborate up-front design.

o Small releases: a working version is released frequently, adding small increments of functionality.

o Collective code ownership and coding standards: anyone may change any code, guided by shared conventions.

o On-site customer: a customer representative works with the team to answer questions and set priorities.

A minimal test-first sketch follows.
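As a minimal illustration of the test-first practice applied to a user story, here is a hypothetical example using Python's built-in unittest; the story, function, and figures are invented.

```python
import unittest

# User story (written on a card): "As a shopper, I want a 10% discount
# applied to orders over $100 so that large orders are rewarded."


def order_total(subtotal: float) -> float:
    """Simplest code that makes the tests pass (simple design)."""
    return subtotal * 0.9 if subtotal > 100 else subtotal


class OrderTotalStory(unittest.TestCase):
    """Tests written first (test-driven development)."""

    def test_large_order_gets_discount(self):
        self.assertAlmostEqual(order_total(200.0), 180.0)

    def test_small_order_unchanged(self):
        self.assertAlmostEqual(order_total(50.0), 50.0)


if __name__ == "__main__":
    unittest.main()
```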

Unit - 2 Managing Software Project
Software Metrics (Process, Product and Project Metrics), Software Project Estimations, Software Project Planning (MS Project Tool), Project Scheduling & Tracking, Risk Analysis & Management (Risk Identification, Risk Projection, Risk Refinement, Risk Mitigation). Understanding the Requirement, Requirement Modelling, Requirement Specification (SRS), Requirement Analysis and Requirement Elicitation, Requirement Engineering. Design Concepts and Design Principles, Architectural Design, Component-Level Design, User Interface Design, Web Application Design

What is Project?

A project is a group of tasks that need to be completed to reach a clear result. A project can also be defined as a set of inputs and outputs required to achieve a goal. Projects can vary from simple to difficult and can be operated by one person or a hundred.

Projects are usually described and approved by a project manager or team executive, who sets out their expectations and objectives; it is then up to the team to handle logistics and complete the project on time. For good project development, some teams split the project into specific tasks so they can manage responsibility and utilize team strengths.

What is software project management?


Software project management is an art and discipline of planning and supervising software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored, and controlled.

It is a procedure of managing, allocating, and timing resources to develop computer software that fulfills requirements.

In software project management, the client and the developers need to know the length, duration, and cost of the project.

Prerequisites of software project management

There are three needs for software project management. These are:

1. Time

2. Cost

3. Quality

It is an essential part of a software organization to deliver a quality product, keeping the cost within the client's budget and delivering the project as per schedule. There are various factors, both external and internal, which may impact this triple constraint. Any one of these three factors can severely affect the other two.

Project Manager

A project manager is a person who has overall responsibility for the planning, design, execution, monitoring, controlling, and closure of a project. A project manager plays an essential role in the success of a project.

A project manager is responsible for making decisions, both large and small. The project manager manages risk and minimizes uncertainty. Every decision the project manager makes must directly benefit the project.

Role of a Project Manager:

1. Leader:

A project manager must lead the team and provide direction so that team members understand what is expected of them.

2. Medium:

The project manager is a medium between his clients and his team. He must coordinate and transfer all the appropriate information from the clients to his team and report to senior management.

3. Mentor:

He should be there to guide his team at each step and make sure that the team stays cohesive. He provides recommendations to his team and points them in the right direction.

Responsibilities of a Project Manager:

1. Managing risks and issues.

2. Creating the project team and assigning tasks to the team members.

3. Activity planning and sequencing.

4. Monitoring and reporting progress.

5. Modifying the project plan to deal with changing situations.


• Software Metrics (Process, Product and Project Metrics)
Software Metrics

A software metric is a measure of software characteristics which are measurable or countable. Software metrics are
valuable for many reasons, including measuring software performance, planning work items, measuring productivity,
and many other uses.

Within the software development process there are many metrics that are all connected. Software metrics relate to the four functions of management: planning, organization, control, and improvement.

Classification of Software Metrics

Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product. The two important
software characteristics are:

1. Size and complexity of software.

2. Quality and reliability of software.

These metrics can be computed for different stages of SDLC.

2. Process Metrics: These are the measures of various characteristics of the software development process. For
example, the efficiency of fault detection. They are used to measure the characteristics of methods, techniques, and
tools that are used for developing software.

Types of Metrics

Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed to be of greater
importance to a software developer. For example, Lines of Code (LOC) measure.

External metrics: External metrics are the metrics used for measuring properties that are viewed to be of greater
importance to the user, e.g., portability, reliability, functionality, usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource metrics. For example,
cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to check the project's progress. Data
from past projects are used to collect various metrics, like time and cost; these estimates serve as a baseline for
new projects. As the project proceeds, the project manager checks its progress from time to time and compares the
actual effort, cost, and time with the original estimates. These metrics are used to decrease development cost,
effort, and risk. Project quality can also be improved; as quality improves, the number of errors, and with them the
time and cost required, is reduced.
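
As a small, hypothetical illustration of tracking project metrics, the sketch below compares baseline estimates of effort, cost, and time against current actuals and reports the variance; the figures and metric names are invented for demonstration.

```python
# A minimal sketch of project-metric tracking: comparing the original
# estimates with actuals as the project proceeds. All figures are hypothetical.

baseline = {"effort_pm": 24, "cost_k": 120, "time_weeks": 16}
actual   = {"effort_pm": 27, "cost_k": 131, "time_weeks": 18}

for metric in baseline:
    variance = actual[metric] - baseline[metric]
    pct = 100 * variance / baseline[metric]
    print(f"{metric:11s} planned={baseline[metric]:4} "
          f"actual={actual[metric]:4} variance={variance:+} ({pct:+.1f}%)")
```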

Advantage of Software Metrics

Comparative study of various design methodologies of software systems.

For analysis, comparison, and critical study of different programming languages with respect to their characteristics.

In comparing and evaluating the capabilities and productivity of people involved in software development.

In the preparation of software quality specifications.

In the verification of compliance of software systems requirements and specifications.

In making inference about the effort to be put in the design and development of the software systems.

In getting an idea about the complexity of the code.

In deciding whether further division of a complex module is required or not.

In guiding resource managers for proper resource utilization.

In comparison and making design tradeoffs between software development and maintenance cost.

In providing feedback to software managers about the progress and quality during various phases of the software
development life cycle.

In the allocation of testing resources for testing the code.

Disadvantage of Software Metrics

The application of software metrics is not always easy, and in some cases, it is difficult and costly.

The verification and justification of software metrics are based on historical/empirical data whose validity is difficult
to verify.

These are useful for managing software products but not for evaluating the performance of the technical staff.

The definition and derivation of software metrics are usually based on assumptions that are not standardized and
may depend upon the tools available and the working environment.

Most of the predictive models rely on estimates of certain variables which are often not known precisely.

• Project Size Estimation Techniques – Software Engineering


In the dynamic field of Software Engineering, the accurate estimation of project size is a fundamental aspect that
influences the success of software projects. Project Size Estimation Techniques are essential tools that help in
predicting the required resources, time, and cost, thus ensuring the project’s feasibility and efficiency from the onset.
It is a crucial aspect of software engineering, as it helps in planning and allocating resources for the project.

What is Project Size Estimation?

Project size estimation is determining the scope and resources required for the project.

1. It involves assessing the various aspects of the project to estimate the effort, time, cost, and resources
needed to complete the project.
2. Accurate project size estimation is important for effective and efficient project planning, management, and
execution.

Importance of Project Size Estimation

Here are some of the reasons why project size estimation is critical in project management:

1. Financial Planning: Project size estimation helps in planning the financial aspects of the project, thus helping
to avoid financial shortfalls.

2. Resource Planning: It ensures the necessary resources are identified and allocated accordingly.

3. Timeline Creation: It facilitates the development of realistic timelines and milestones for the project.

4. Identifying Risks: It helps to identify potential risks associated with overall project execution.

5. Detailed Planning: It helps to create a detailed plan for the project execution, ensuring all the aspects of the
project are considered.

6. Planning Quality Assurance: It helps in planning quality assurance activities and ensuring that the project
outcomes meet the required standards.

Who Estimates Projects Size?

Here are the key roles involved in estimating the project size:

1. Project Manager: Project manager is responsible for overseeing the estimation process.

2. Subject Matter Experts (SMEs): SMEs provide detailed knowledge related to the specific areas of the project.

3. Business Analysts: Business Analysts help in understanding and documenting the project requirements.

4. Technical Leads: They estimate the technical aspects of the project such as system design, development,
integration, and testing.

5. Developers: They will provide detailed estimates for the tasks they will handle.

6. Financial Analysts: They provide estimates related to the financial aspects of the project including labor
costs, material costs, and other expenses.

7. Risk Managers: They assess the potential risks that could impact the projects’ size and effort.

8. Clients: They provide input on project requirements, constraints, and expectations.

Different Methods of Project Estimation

1. Expert Judgment: In this technique, a group of experts in the relevant field estimates the project size based
on their experience and expertise. This technique is often used when there is limited information available
about the project.

2. Analogous Estimation: This technique involves estimating the project size based on the similarities between
the current project and previously completed projects. This technique is useful when historical data is
available for similar projects.

3. Bottom-up Estimation: In this technique, the project is divided into smaller modules or tasks, and each task
is estimated separately. The estimates are then aggregated to arrive at the overall project estimate.

4. Three-point Estimation: This technique involves estimating the project size using three values: optimistic,
   pessimistic, and most likely. These values are then used to calculate the expected project size using a formula
   such as the PERT formula (a worked sketch appears after this list).
5. Function Points: This technique involves estimating the project size based on the functionality provided by
the software. Function points consider factors such as inputs, outputs, inquiries, and files to arrive at the
project size estimate.

6. Use Case Points: This technique involves estimating the project size based on the number of use cases that
the software must support. Use case points consider factors such as the complexity of each use case, the
number of actors involved, and the number of use cases.

7. Parametric Estimation: For precise size estimation, mathematical models founded on project parameters and
historical data are used.

8. COCOMO (Constructive Cost Model): It is an algorithmic model that estimates effort, time, and cost in
software development projects by taking into account several different elements.

9. Wideband Delphi: A consensus-based estimation method that combines estimates from anonymous experts
   with cooperative discussion to arrive at balanced size estimates.

10. Monte Carlo Simulation: This technique, which works especially well for complicated and unpredictable
    projects, estimates project size and analyses risks using statistical methods and random sampling.

Each of these techniques has its strengths and weaknesses, and the choice of technique depends on various factors
such as the project’s complexity, available data, and the expertise of the team.
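
As a worked illustration of the three-point technique (item 4 above), the sketch below applies the PERT formula E = (O + 4M + P) / 6 to a few hypothetical tasks; the task names and the optimistic/most-likely/pessimistic values are assumptions made for the example.

```python
# A minimal sketch of three-point (PERT) estimation.
# Task names and (O, M, P) values are hypothetical example data.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected effort using the PERT weighted average: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """A common companion measure of uncertainty: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

tasks = {
    "Design":  (4, 6, 10),   # (O, M, P) in person-days
    "Coding":  (8, 12, 20),
    "Testing": (3, 5, 9),
}

total = 0.0
for name, (o, m, p) in tasks.items():
    e = pert_estimate(o, m, p)
    total += e
    print(f"{name:8s} expected effort = {e:.1f} person-days "
          f"(sigma = {pert_std_dev(o, p):.1f})")

print(f"Project total = {total:.1f} person-days")
```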

Estimating the Size of the Software

Estimation of the size of the software is an essential part of Software Project Management. It helps the project
manager to further predict the effort and time that will be needed to build the project. Here are some of the
measures that are used in project size estimation:

1. Lines of Code (LOC)

As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:

1. KLOC: Thousand lines of code

2. NLOC: Non-comment lines of code

3. KDSI: Thousands of delivered source instruction
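
To make the LOC-style measures concrete, here is a minimal sketch that counts total lines, non-comment lines (NLOC), and KLOC for a Python source file. It assumes '#'-style comments only; real LOC tools handle multi-line strings and full language grammar, so treat this as an approximation.

```python
# A minimal sketch of collecting LOC-style size measures for a Python file.
# Lines starting with '#' are treated as comments; real tools are more thorough.

def loc_measures(path):
    total, nloc = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue            # skip blank lines
            total += 1
            if not stripped.startswith("#"):
                nloc += 1           # non-comment line of code
    return {"LOC": total, "NLOC": nloc, "KLOC": total / 1000}

# Example usage (assumes a file named example.py exists):
# print(loc_measures("example.py"))
```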

• Software Project Planning (MS Project Tool)


To manage projects adequately and efficiently, we use project management tools.

Here are some standard tools:

Gantt chart

The Gantt chart was first developed by Henry Gantt in 1917. It is usually utilized in project management, and it is one
of the most popular and helpful ways of showing activities displayed against time. Each activity is represented by a bar.

Gantt chart is a useful tool when you want to see the entire landscape of either one or multiple projects. It helps you
to view which tasks are dependent on one another and which event is coming up.
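
As a minimal illustration of the idea, the sketch below renders a very simple text-based Gantt chart from hypothetical tasks, each defined by a start week and a duration; dedicated tools such as Microsoft Project produce far richer, interactive charts.

```python
# A minimal sketch of a text-based Gantt chart.
# Task names, start weeks, and durations are hypothetical example data.

tasks = [
    ("Requirements", 1, 2),   # (name, start week, duration in weeks)
    ("Design",       3, 3),
    ("Coding",       5, 4),
    ("Testing",      8, 3),
]

horizon = max(start + dur for _, start, dur in tasks)

print("Week:         " + "".join(f"{w:<3}" for w in range(1, horizon)))
for name, start, dur in tasks:
    bar = "   " * (start - 1) + "###" * dur   # three columns per week
    print(f"{name:<13} {bar}")
```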

PERT chart

PERT is an acronym for Programme Evaluation and Review Technique. It was developed in the 1950s by the U.S. Navy to
handle the Polaris submarine missile programme.

In project management, a PERT chart is represented as a network diagram with a number of nodes, which represent
events.

The direction of the lines indicates the sequence of the tasks. Tasks that must be completed in order are known as
dependent or serial tasks. Nodes that do not depend on each other can be undertaken simultaneously; these are
known as parallel or concurrent tasks. Tasks that consume no resources or completion time, but must still follow the
sequence, express an event dependency; these are known as dummy activities and are represented by dotted lines.

Logic Network

The Logic Network shows the order of activities over time, i.e., the sequence in which activities are to be done.
Distinguishing events and pinning down the project are its two primary uses. Moreover, it helps with understanding
task dependencies, the timescale, and the overall project workflow.

Product Breakdown Structure


Product Breakdown Structure (PBS) is a management tool and a necessary part of project planning. It is a task-
oriented system for subdividing a project into product parts. The product breakdown structure describes subtasks or
work packages and represents the connections between work packages. Within the product breakdown structure, the
project work is diagrammatically represented with various types of lists. The product breakdown structure is similar to
the work breakdown structure (WBS).

Work Breakdown Structure

It is an important project deliverable that classifies the team's work into manageable segments. The "Project Management
Body of Knowledge (PMBOK)", a collection of project management terminology, describes the work breakdown structure as a
"deliverable-oriented hierarchical breakdown of the work which is performed by the project team."

There are two ways to generate a Work Breakdown Structure: the top-down approach and the bottom-up approach.

In the top-down approach, the WBS is derived by decomposing the overall project into subprojects or lower-level tasks.

The bottom-up approach is more akin to a brainstorming exercise, where team members are asked to make a list of the
low-level tasks required to complete the project.
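
As a small sketch of the idea, the snippet below represents a hypothetical WBS as a nested dictionary (built top-down) and flattens it into the low-level work packages a bottom-up estimator would start from; all project and task names are invented.

```python
# A minimal sketch of a Work Breakdown Structure as a nested dictionary.
# The project and task names are hypothetical.

wbs = {
    "Online Store": {
        "Frontend": {"Design UI": {}, "Implement pages": {}},
        "Backend": {"Build API": {}, "Set up database": {}},
        "Testing": {"Write test plan": {}, "Run acceptance tests": {}},
    }
}

def work_packages(node, path=()):
    """Yield the leaf tasks (work packages) with their full WBS path."""
    for name, children in node.items():
        if children:
            yield from work_packages(children, path + (name,))
        else:
            yield " / ".join(path + (name,))

for package in work_packages(wbs):
    print(package)
```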

Resource Histogram

The resource histogram is essentially a bar chart used to display the amount of time that a resource is scheduled to
work over a prearranged, specific period. Resource histograms can also show the related feature of resource
availability, used for comparison purposes.
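
A minimal sketch of a resource histogram follows: it prints scheduled hours per week for one resource and flags weeks that exceed availability. All numbers are hypothetical.

```python
# A minimal sketch of a resource histogram: scheduled hours per week for one
# resource, compared against availability. The numbers are hypothetical.

scheduled = {"W1": 32, "W2": 40, "W3": 48, "W4": 24}
available_per_week = 40

for week, hours in scheduled.items():
    bar = "#" * (hours // 4)                 # one '#' per four hours
    flag = "  <-- over-allocated" if hours > available_per_week else ""
    print(f"{week}: {bar} {hours}h{flag}")
```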

Critical Path Analysis

Critical path analysis is a technique used to identify the activities required to complete a task, the time needed to
finish each activity, and the dependencies between the activities. It is also called the critical path method (CPM).
CPA helps in predicting whether a project will finish on time.
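
The sketch below illustrates the core of critical path analysis on a tiny hypothetical activity network: a forward pass computes the earliest finish time of each activity, and following the longest chain back from the final activity yields the critical path. Activity names, durations, and dependencies are invented for the example.

```python
# A minimal sketch of critical path analysis on a small activity network.
# Activities, durations, and dependencies are hypothetical example data.

durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 3}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

earliest_finish = {}
best_pred = {}

pending = set(durations)
while pending:
    # Pick any activity whose predecessors have all been computed.
    act = next(a for a in sorted(pending)
               if all(p in earliest_finish for p in predecessors[a]))
    start = max((earliest_finish[p] for p in predecessors[act]), default=0)
    earliest_finish[act] = start + durations[act]
    best_pred[act] = (max(predecessors[act], key=earliest_finish.get)
                      if predecessors[act] else None)
    pending.remove(act)

# The critical path ends at the activity with the largest earliest finish.
end = max(earliest_finish, key=earliest_finish.get)
path = [end]
while best_pred[path[-1]] is not None:
    path.append(best_pred[path[-1]])
path.reverse()

print("Critical path:", " -> ".join(path))
print("Minimum project duration:", earliest_finish[end])
```

For the data above, the forward pass gives A=3, B=5, C=7, D=9, E=12, so the critical path is A -> C -> D -> E with a minimum duration of 12.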

• Project Scheduling & Tracking


Project Schedule Tracking Tools in Software Engineering

Project Planning is an important activity performed by Project Managers. Project Managers can use tools and
techniques to develop, monitor, and control project timelines and schedules. Tracking tools can automatically
produce a pictorial representation of the project plan. These tools also instantly update time plans as soon as new
information is entered and produce automatic reports to control the project. Scheduling tools also cover task
breakdown and risk management with greater accuracy and ease of monitoring through reports. They also provide a
good GUI to communicate effectively with the stakeholders of the project.

Features of Project Scheduling Tools

• Time management: The project scheduling tools keep projects running the way they are planned, with proper
  time management and better scheduling of the tasks.

• Resource allocation: It identifies the resources required for project development, ensures proper resource
  allocation, and helps make sure that proper permissions are given to the different individuals involved in the
  project. It helps to monitor and control all resources in the project.

• Team collaboration: The project scheduling tool improves team collaboration and communication. It helps to
make it easy to comment and chat within the platform without relying on external software.

• User-friendly interface: Good project scheduling tools are designed to be more user-friendly to enable teams
to complete projects in a better and more efficient way.

Benefits of Project Scheduling Tools

• Defines work tasks: The project scheduling tool defines the work tasks of a project.

• Time and resource management: It helps to keep the project on track with respect to the time and plan.

• Cost management: It helps in determining the cost of the project.

• Improved productivity: It enables greater productivity in teams through smarter planning, better scheduling,
  and better task delegation.

• Increased efficiency: The project scheduling tool increases speed and efficiency in project development.

Criteria for Selecting Project Scheduling Tools

• Capability to handle multiple projects: The scheduling tool must handle multiple projects at a time.

• User-friendly: It should be easy to use and must have a user-friendly interface.

• Budget friendly: The tool should be of low cost and should be within the development budget.

• Security features: The tool must be secure and protected against vulnerabilities and threats.

Top 10 Project Scheduling Tools

1. Microsoft Project

2. Daily Activity Reporting and Tracking (DART)

3. Monday.com

4. ProjectManager.com

5. SmartTask

6. ProofHub

7. Asana

8. Wrike

9. GanttPRO

10. Zoho Projects

Let’s start discussing each of these tools in detail.

1. Microsoft Project
Microsoft offers a project management tool named Microsoft Project for project planning activities. Microsoft Project is
simple to use for scheduling projects. It generates a variety of reports and templates as per industry standards,
and can present data in diagrams or charts in pictorial form. Themes and templates can be customized by the user.
It supports cloud services and can share data remotely with other users.

Features:

• Creating the project.

• Assigning resources to the task.

• Tracking the process.

• Managing the cost of the project.

• Analysis of the project.

• Automatic / Manual Scheduling for specific tasks or the entire project.

• To accomplish the tasks, deadlines are set.

2. Daily Activity Reporting and Tracking (DART)

DART (Daily Activity Reporting and Tracking) enables you to track changes to records made by users. Many organizations
use the DART tool to monitor the progress of software projects. DART collects project data and keeps track of the
activities in the process. Because DART tracks the progress of the project, it serves as an indicator against the
project plan. DART is also used to keep software project stakeholders informed about the performance and status of
the project.

Features:

• Defines work tasks and their Interdependencies.

• Provides the ability to dispatch the email with a summary of changes.

• Tracks the process and changes in the project plan.

• Analyzing the project plan.

• Manages budget for the project.

• Risk Analysis & Management (Risk Identification, Risk Projection, Risk Refinement ,
Risk Mitigation)
What is Risk?

"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem that could cause some loss or
threaten the progress of the project, but which has not happened yet.

These potential issues might harm the cost, schedule, or technical success of the project, the quality of our software
product, or project team morale.

Risk management is the process of identifying, addressing, and eliminating these problems before they can damage
the project.

We need to differentiate risks, as potential issues, from the current problems of the project.

What is Software Risk Analysis in Software Development?


Software risk analysis in Software Development involves identifying which application risks should be tested first. Risk
is the possible loss or harm that an organization might face. Risks can include issues like project management,
technical challenges, resource constraints, changes in requirements, and more. Finding every possible risk and
estimating its impact are the two goals of risk analysis. When creating a test plan, think about the potential
consequences of testing your software and how it could impact your software. Risk detection during the production
phase can be costly; therefore, risk analysis in testing is the best way to figure out what can go wrong before going
into production.

Why perform software risk analysis?

Using different technologies, software developers add new features in Software Development. Software system
vulnerabilities grow along with the technology. Software products are therefore more vulnerable to malfunctioning
or performing poorly.

Many factors, including timetable delays, inaccurate cost projections, a lack of resources, and security hazards,
contribute to the risks associated with software in Software Development.

Certain risks are unavoidable, some of them are as follows:

• The limited amount of time set aside for testing.

• Defect leaks can happen in complicated or large-scale applications.

• The client has an immediate requirement to finish the job.

• The specifications are inadequate.

Therefore, it’s critical to identify, prioritize, and reduce risks, or take proactive preventive action, during the software
development process, rather than merely monitoring risk possibilities.

Possible Scenarios of Risk Occurrence

Here are some possible scenarios of software risk:

Unknown Unknowns

These risks are unknown to the organization and are generally technology-related; because of this, they are not
anticipated. Organizations might face unexpected challenges, delays, or failures due to these unexpected risks. Lack
of experience with a particular tool or technology can lead to difficulties in implementation.

Example

Suppose an organization is using a cloud service from a third-party vendor and, due to some issue, the vendor is
unable to provide its service. In this situation, the organization faces an unexpected delay.

Known Knowns

These are risks that are well-understood and documented by the team. Since these risks are identified early, teams
can plan for mitigation strategies. The impact of known knowns is usually more manageable compared to unknown
risks.

Example

The shortage of developers is a known risk that can cause delays in software development.

Known Unknowns

In this case, the organization is aware of potential risks, but whether they will occur is uncertain. The organization
should be ready to deal with these risks if they happen. Ways to deal with them might include improving
communication, making sure everyone understands what is needed, or creating guidelines for how to manage
possible misunderstandings.

Example
The team may be aware of the risk of miscommunication with the client, but whether it will actually happen is
unknown.

Types of Software Risk

The types of software risk, their impact, and examples of each are given below:

Technical Risks

• Description: Risks arising from technical challenges or limitations in the software development process.
• Impact: Technical risks can lead to delays, cost overruns, and even software failure if not properly managed.
• Examples: Incomplete or inaccurate requirements; unforeseen technical complexities; integration issues with
  third-party systems; inadequate testing and quality assurance; insecure coding practices.

Security Risks

• Description: Risks related to vulnerabilities in the software that could allow unauthorized access or data breaches.
• Impact: Security risks can lead to financial losses, reputational damage, and legal liabilities.
• Examples: Lack of proper access controls; vulnerabilities in third-party libraries; insufficient data security measures.

Scalability Risks

• Description: Risks associated with the software’s ability to handle increasing workloads or user demands.
• Impact: Scalability risks can lead to performance bottlenecks, outages, and lost revenue.
• Examples: Inadequate infrastructure capacity; inefficient algorithms or data structures; lack of scalability testing;
  poorly designed architecture.

Performance Risks

• Description: Risks related to the software’s ability to meet performance expectations in terms of speed,
  responsiveness, and resource utilization.
• Impact: Performance risks can lead to user dissatisfaction, lost productivity, and competitive disadvantage.
• Examples: Inefficient algorithms or data structures; excessive memory or CPU usage; poor database performance;
  network latency issues.

Budgetary Risks

• Description: Risks associated with exceeding the project’s budget or financial constraints.
• Impact: Budgetary risks can lead to financial strain, project delays, and even cancellation.
• Examples: Unrealistic cost estimates; scope creep or changes in requirements; unforeseen expenses, such as
  third-party licenses or hardware upgrades; inefficient resource utilization.

Contractual & Legal Risks

• Description: Risks arising from legal or contractual obligations that are not properly understood or managed.
• Impact: Contractual and legal risks can lead to disputes, delays, and even legal action.
• Examples: Unclear or ambiguous contract terms; failure to comply with intellectual property laws; data privacy
  violations; lack of proper documentation and record-keeping.

Operational Risks

• Description: Risks associated with the ongoing operation and maintenance of the software system.
• Impact: Operational risks can lead to downtime, outages, and data loss.
• Examples: Inadequate monitoring and alerting systems; lack of proper disaster recovery plans; insufficient training
  for operational staff; poor change management practices.

Schedule Risks

• Description: Risks related to delays in the software development process or missed deadlines.
• Impact: Schedule risks can lead to increased costs, pressure on resources, and missed market opportunities.
• Examples: Unrealistic timelines or milestones; underestimation of task complexity; resource dependencies or
  conflicts; unforeseen events or delays.

How to perform software risk analysis in Software Development

In order to conduct risk analysis in software development, you first have to evaluate the source code in detail to
understand its components. This evaluation is done to identify the components of the code and map their interactions.
With the help of the map, transactions can be detected and assessed. The map is subjected to structural and
architectural guidelines in order to recognize and understand the primary software defects. Following are the steps to
perform software risk analysis.
Risk Assessment

The purpose of the risk assessment is to identify and prioritize the risks at the earliest stage and avoid losing time
and money.
Under risk assessment, you will go through:

• Risk identification: It is crucial to detect the type of risk as early as possible and address them. The risk types
are classified into

o People risks: related to the people in the software development team

o Tools risks: related to using tools and other software

o Estimation risks: related to estimates of the resources required to build the software

o Technology risks: are related to the usage of hardware or software technologies required to build the
software

o Organizational risks: are related to the organizational environment where the software is being
created.

• Risk analysis: Experienced developers analyze the identified risks based on experience gained from previous
  software projects. In the next phase, the Software Development team estimates the probability of each risk
  occurring and its seriousness.

• Risk prioritization: The risk priority can be computed using the formula below

p = r * s

Where,

p stands for priority,

r stands for the probability of the risk occurring, and

s stands for the severity of the risk.

After identifying the risks, the ones with the probability of becoming true and higher loss must be prioritized and
controlled.
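
As a small illustration, the sketch below applies the priority formula p = r * s to a few hypothetical risks and sorts them so that the most probable, most severe risks come first; the risk names and values are invented.

```python
# A minimal sketch of risk prioritization using p = r * s, where r is the
# probability of the risk occurring (0..1) and s is its severity (here on a
# 1..10 scale). The example risks and values are hypothetical.

risks = [
    ("Key developer leaves",        0.3, 9),
    ("Requirements change late",    0.6, 6),
    ("Third-party API is unstable", 0.4, 7),
]

prioritized = sorted(
    ((name, r * s) for name, r, s in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, priority in prioritized:
    print(f"priority {priority:4.1f}  {name}")
```
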
Risk control

Risk control is performed to manage the risks and obtain desired results. Once identified, the risks can be classified
into the most and least harmful.

• Understanding the Requirement, Requirement Modelling, Requirement Specification (SRS)

In software engineering, understanding the concepts of requirement, requirement modeling, and requirement
specification (SRS) is fundamental to developing successful software systems. Let’s explore these concepts:

1. Requirement

A requirement is a description of a feature or functionality that a system must provide or a condition it must satisfy
to fulfill the stakeholders' needs. Requirements are the foundation of any software development process. They are
classified into two types:

• Functional Requirements: These define the specific behavior or functions the system must perform. For
example, "The system must allow users to log in with a username and password."

• Non-Functional Requirements: These define the system’s operational characteristics or constraints, like
performance, security, reliability, and scalability. For example, "The system should handle 1,000 users
concurrently without performance degradation."

Requirements can also be categorized as:

• User requirements: High-level requirements that describe what the end-users expect.

• System requirements: Detailed requirements derived from user requirements, which outline how the system
should function internally.

2. Requirement Modeling

Requirement modeling is the process of representing the system's requirements using diagrams, flowcharts, models,
or structured text. The goal is to understand, analyze, and communicate the requirements effectively to both
technical and non-technical stakeholders. It acts as a bridge between the conceptualization of the system and its
actual design and implementation.

There are several types of requirement models, including:


• Use Case Diagrams: Show interactions between users (actors) and the system, representing functional
requirements.

• Data Flow Diagrams (DFD): Depict how data moves through the system and how inputs are transformed into
outputs.

• Entity-Relationship Diagrams (ERD): Used to model data and its relationships within the system.

• Class Diagrams: Depict objects, their attributes, and the relationships between them in object-oriented
systems.

The primary purpose of requirement modeling is to ensure that requirements are clear, unambiguous, and complete
before moving into the design phase. It also helps in discovering inconsistencies and gaps in the requirements.

3. Requirement Specification (SRS)

The Software Requirement Specification (SRS) is a formal document that outlines all the functional and non-
functional requirements of the system. It serves as a reference for developers, testers, project managers, and
stakeholders throughout the software development lifecycle. The SRS is typically created during the early stages of
the software development process and forms the foundation for designing, developing, and validating the system.

Key Components of an SRS:

1. Introduction:

o Purpose of the system.

o Scope of the project.

o Definitions, acronyms, and abbreviations.

o References and related documents.

2. Overall Description:

o Product perspective (how the system fits into existing workflows or systems).

o Product functions (a high-level overview of system capabilities).

o User characteristics (the end-users of the system).

o Assumptions and dependencies.

3. Functional Requirements:

o Detailed description of the functionalities the system must provide.

o Each requirement is numbered and described in detail (e.g., “The system shall allow users to reset
their passwords”).

4. Non-Functional Requirements:

o Performance, reliability, security, and usability constraints.

o These define how well the system performs tasks, such as response time or user interface design.

5. External Interface Requirements:

o Interaction with other systems (e.g., APIs, databases, hardware).

6. System Models:

o Diagrams like use cases, data flow, and class diagrams that help describe the requirements in a visual
format.
Importance of SRS:

• Clarity: It removes ambiguity by clearly defining what is required from the system.

• Agreement: Ensures that all stakeholders have the same understanding of the system’s functionality.

• Baseline: It serves as a reference point for future development phases, such as design, coding, and testing.

• Testing: The SRS provides a basis for developing test cases to validate the system against the requirements.

Process Flow: From Requirement to SRS

1. Requirement Gathering: Understanding the user needs through interviews, surveys, and stakeholder
meetings.

2. Requirement Analysis: Refining and analyzing requirements to ensure they are feasible, consistent, and
aligned with business goals.

3. Requirement Modeling: Representing the requirements visually to ensure completeness and comprehension.

4. Requirement Specification: Writing the SRS to formally document all the requirements for the project.

Conclusion

Understanding the requirement, modeling it properly, and documenting it through a formal SRS are critical steps in
the software development process. These ensure that the project proceeds with a clear understanding of what needs
to be built, reducing risks related to scope creep, misunderstandings, and rework later in the development process.

• Requirement Analysis and Requirement Elicitation


In software engineering, Requirement Analysis and Requirement Elicitation are crucial steps in ensuring that a
software project meets the needs of stakeholders and fulfills the intended purpose of the system.

1. Requirement Elicitation

Requirement elicitation is the process of gathering information from stakeholders to understand what they expect
from a software system. It involves identifying, collecting, and articulating the system requirements through various
techniques. The goal is to ensure that the development team clearly understands the needs of the stakeholders.

Techniques of Requirement Elicitation:

• Interviews: Conducting one-on-one or group interviews with stakeholders to understand their needs.

• Surveys/Questionnaires: Using structured forms to gather information from a large number of users or
stakeholders.

• Workshops: Collaborative meetings where stakeholders and developers discuss and brainstorm
requirements.

• Brainstorming: A group session where ideas for the system's requirements are freely suggested.

• Prototyping: Creating an early version of the system for users to interact with and provide feedback.

• Observation: Watching how users perform their tasks in their current systems or environments to gather
insights.

• Document Analysis: Reviewing existing documentation related to the business process, legacy systems, or
business rules.
2. Requirement Analysis

Requirement analysis is the process of refining, clarifying, and organizing the gathered requirements into a structured
format. It involves breaking down high-level requirements into more detailed and clear specifications, ensuring that
they are complete, consistent, and feasible within the project's scope and constraints. The goal of requirement
analysis is to ensure that the software being developed will meet the business goals and the needs of the end-users.

Key Activities in Requirement Analysis:

• Classification and Prioritization: Grouping requirements into categories (e.g., functional vs. non-functional)
and prioritizing them based on importance and project constraints.

• Conflict Resolution: Resolving conflicting requirements from different stakeholders.

• Feasibility Study: Determining whether the proposed requirements are technically, financially, and legally
viable.

• Consistency Check: Ensuring that there are no conflicting or redundant requirements and that all the
requirements align with the business goals.

• Documentation: Preparing structured documentation such as use cases, process models, and scenarios to
communicate requirements to both technical and non-technical stakeholders.

3. Difference Between Requirement Elicitation and Requirement Analysis:

Aspect by aspect, the two activities differ as follows:

• Focus: Requirement elicitation gathers raw information and requirements from stakeholders; requirement analysis
  refines and organizes the gathered requirements.

• Techniques: Elicitation uses interviews, workshops, surveys, observation, and document analysis; analysis uses
  feasibility studies, prioritization, and conflict resolution.

• Output: Elicitation produces the initial set of requirements (raw data); analysis produces structured, clear, and
  detailed requirements (finalized data).

• Goal: Elicitation aims to understand what the stakeholders need; analysis ensures that the requirements are
  feasible, complete, and aligned with project goals.

Conclusion

Requirement Elicitation helps in understanding what stakeholders expect from the system, while Requirement
Analysis ensures that those expectations are viable and clear. Together, they form the foundation for developing
high-quality software that meets users' needs effectively.

• Requirement Engineering
Requirements engineering (RE) refers to the process of defining, documenting, and maintaining requirements in the
engineering design process. Requirement engineering provides the appropriate mechanism to understand what the
customer desires, analyzing the need, and assessing feasibility, negotiating a reasonable solution, specifying the
solution clearly, validating the specifications and managing the requirements as they are transformed into a working
system. Thus, requirement engineering is the disciplined application of proven principles, methods, tools, and
notation to describe a proposed system's intended behavior and its associated constraints.

Requirement Engineering Process

It is a process which includes the following activities -


1. Feasibility Study

2. Requirement Elicitation and Analysis

3. Software Requirement Specification

4. Software Requirement Validation

5. Software Requirement Management

1. Feasibility Study:

The objective behind the feasibility study is to establish the reasons for developing software that is acceptable to
users, adaptable to change, and conformant to established standards.

Types of Feasibility:

1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are needed to
accomplish customer requirements within the time and budget.

2. Operational Feasibility - Operational feasibility assesses the range in which the required software performs a
series of levels to solve business problems and customer requirements.

3. Economic Feasibility - Economic feasibility decides whether the necessary software can generate financial
profits for an organization.

• Design Concepts and Design Principal


Design Concepts Introduction: Software design encompasses the set of principles, concepts, and practices that lead
to the development of a high-quality system or product. Design principles establish an overriding philosophy that
guides you in the design work you must perform. Design is pivotal to successful software engineering. The goal of
design is to produce a model or representation that exhibits firmness, commodity, and delight. Software design
changes continually as new methods, better analysis, and broader understanding evolve.
Software design principles are concerned with providing means to handle the complexity of the design process
effectively. Effectively managing the complexity will not only reduce the effort needed for design but can also reduce
the scope of introducing errors during design.

Following are the principles of Software Design

Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem, we divide and conquer:
the problem is divided into smaller pieces so that each piece can be handled separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning

1. Software is easy to understand

2. Software becomes simple

3. Software is easy to test

4. Software is easy to modify

5. Software is easy to maintain

6. Software is easy to expand

These pieces cannot be entirely independent of each other as they together form the system. They have to cooperate
and communicate to solve the problem. This communication adds complexity.
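
As a tiny illustration of problem partitioning, the sketch below divides one problem (producing a sales report) into three small, separately testable pieces that cooperate through simple interfaces; the function names and data are hypothetical.

```python
# A minimal sketch of problem partitioning: one problem (produce a sales
# report) divided into three small pieces that cooperate via simple
# interfaces. All names and data are hypothetical.

def read_sales():
    """Piece 1: acquire raw data (hard-coded here for the sketch)."""
    return [("widget", 3, 9.99), ("gadget", 1, 24.50)]

def summarize(sales):
    """Piece 2: compute totals from the raw records."""
    return sum(qty * price for _, qty, price in sales)

def format_report(total):
    """Piece 3: present the result."""
    return f"Total sales: ${total:.2f}"

# The pieces communicate through return values -- this coordination is the
# extra complexity that partitioning introduces.
print(format_report(summarize(read_sales())))
```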

• Architectural Design

Architectural Design
For the program to represent software design, architectural design is required. "The process of defining a
collection of hardware and software components and their interfaces to establish the framework for the
development of a computer system" is how the IEEE defines architectural design. Software designed for
computer-based systems can exhibit one of many architectural styles.
Every style shall outline a system category made up of the following:
o A collection of parts (such as computing modules and databases) that together will carry out a task that the
  system needs.
o The connector set that will facilitate the parts' cooperation, coordination, and communication.
o Requirements that specify how parts can be combined to create a system.
o Semantic models aid in the designer's comprehension of the system's general characteristics.
Software requirements should be converted into an architecture that specifies the components and top-level
organization of the program. This is achieved through architectural design, also known as system design,
which serves as a "blueprint" for software development. The software requirements document is examined to
create this framework, and a methodology for supplying implementation details is designed. The system's
constituent parts and their inputs, outputs, functions, and interplay are described using these details.
Architectural design carries out the following tasks:
1. It establishes an abstraction level where the designers can specify the system's functional and performance
behavior.
2. By outlining the aspects of the system that are easily modifiable without compromising its integrity, it serves
as a guide for improving the system as necessary.
3. It assesses every top-tier design.
4. It creates and records the high-level interface designs, both internal and external.
5. It creates draft copies of the documentation for users.
6. It outlines and records the software integration timetable and the initial test requirements.
The sources for architectural design include:
o Information on the software development project's application domain.
o Data flow diagrams.
o The availability of architectural patterns and styles.
Architectural design is paramount in software engineering, where fundamental requirements like
dependability, cost, and performance are addressed. This task is challenging as the paradigm for software
engineering shifts away from monolithic, standalone, built-from-scratch systems and toward componentized,
evolvable, standards-based, and product-line-oriented systems. Knowing precisely how to move from
requirements to architectural design is another significant challenge for designers. Designers use reusability,
componentization, platform-based and standards-based approaches, and more to address these issues.
Even though developers are in charge of the architectural design, others like user representatives, systems
engineers, hardware engineers, and operations staff are also involved. All stakeholders must be consulted
when reviewing the architectural design to reduce risks and errors.
Components of Architectural Design
High-level organizational structures and connections between system components are established during
architectural design's crucial software engineering phase. It is the framework for the entire software project
and greatly impacts the system's effectiveness, maintainability, and quality. The following are some essential
components of software engineering's architectural design:
o System Organization: The architectural design defines how the system will be organized into various
components or modules. This includes identifying the major subsystems, their responsibilities, and how they
interact.
o Abstraction and Decomposition: Architectural design involves breaking down the system into smaller,
manageable parts. This decomposition simplifies the development process and makes understanding and
maintaining the system easier.
o Design Patterns: Using design patterns, such as Singleton, Factory, or Model-View-Controller (MVC), can help
  standardize and optimize the design process by providing proven solutions to common architectural
  problems (a minimal Singleton sketch is shown after this list).
o Architectural Styles: There are various architectural styles, such as layered architecture, client-server
architecture, microservices architecture, and more. Choosing the right style depends on the specific
requirements of the software project.
o Data Management: Architectural design also addresses how data will be stored, retrieved, and managed
within the system. This includes selecting the appropriate database systems and defining data access
patterns.
o Interaction and Communication: It is essential to plan how various parts or modules will talk to and interact
with one another. This includes specifying message formats, protocols, and APIs.
o Scalability: The architectural plan should consider the system's capacity for expansion and scalability.
Without extensive reengineering, it ought to be able to handle increased workloads or user demands.
o Security: The architectural design should consider security factors like access control, data encryption, and
authentication mechanisms.
o Optimization and performance: The architecture should be created to satisfy performance specifications.
This could entail choosing the appropriate technologies, optimizing algorithms, and effectively using
resources.
o Concerns with Cross-Cutting: To ensure consistency throughout the system, cross-cutting issues like logging,
error handling, and auditing should be addressed as part of the architectural design.
o Extensibility and Flexibility: A good architectural plan should be adaptable and extensible to make future
changes and additions without seriously disrupting the existing structure.
o Communication and Documentation: The development team and other stakeholders must have access to
clear documentation of the architectural design to comprehend the system's structure and design choices.
o Validation and Testing: Plans for how to test and validate the system's components and interactions should
be included in the architectural design.
o Maintainability: Long-term maintenance of the design requires considering factors like code organization,
naming conventions, and modularity.
o Cost factors to consider: The project budget and resource limitations should be considered when designing
the architecture.
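
As one concrete example of the design patterns mentioned in the list above, here is a minimal sketch of the Singleton pattern in Python; the AppConfig class and its contents are hypothetical.

```python
# A minimal sketch of the Singleton pattern: the class always yields the same
# instance. The AppConfig name and its settings are hypothetical.

class AppConfig:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {"db_url": "sqlite:///app.db"}
        return cls._instance

a = AppConfig()
b = AppConfig()
print(a is b)                 # True -- both names refer to one instance
print(a.settings["db_url"])   # shared configuration
```
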
The architectural design phase is crucial in software development because it establishes the system's overall
structure and impacts decisions made throughout the development lifecycle. A software system that meets
the needs of users and stakeholders can be more efficient, scalable, and maintainable thanks to a well-
thought-out architectural design. It also gives programmers a foundation to build the system's code.
Properties of Architectural Design
Several significant traits and qualities of architectural design in software engineering are used to direct the
creation of efficient and maintainable software systems. A robust and scalable architecture must have these
characteristics. Some of the essential characteristics of architectural design in software engineering are as
follows:
o Modularity:
Architectural design encourages modularity by dividing the software system into smaller, self-contained
modules or components. Because each module has a clear purpose and interface, modularity makes the
system simpler to comprehend, develop, test, and maintain.
o Scalability:
Scalability should be supported by a well-designed architecture, enabling the system to handle increased
workloads and growth without extensive redesign. Techniques like load balancing, distributed systems, and
component replication can be used to achieve scalability.
o Maintainability:
A software system's architectural design aims to make it maintainable over time. This entails structuring the
system to support quick updates, improvements, and bug fixes. Maintainability is facilitated by clear
documentation and adherence to coding standards.
o Flexibility:
The flexibility of architectural design should allow for easy adaptation to shifting needs. It should enable the
addition or modification of features without impairing the functionality of the current features. Design
patterns and clearly defined interfaces are frequently used to accomplish this.
o Reliability:
A strong architectural plan improves the software system's dependability. It should reduce the likelihood of
data loss, crashes, and system failures. Redundancy and error-handling procedures can improve reliability.
o Performance:
A crucial aspect of architectural design is performance. It entails fine-tuning the system to meet performance
standards, including throughput, response time, and resource utilization. Design choices like data storage
methods and algorithm selection greatly influence performance.
o Security:
Architectural design must take security seriously. The architecture should include security measures such as
access controls, encryption, authentication, and authorization to safeguard the system from potential threats
and vulnerabilities.
o Separation of Concerns:
By enforcing a clear separation of concerns, architectural design ensures that various system components -
such as the user interface, business logic, and data storage - are arranged and managed independently. This
separation makes maintenance, testing, and development easier.
o Usability:
The system's usability and user experience should be considered when making architectural decisions. User
interfaces and workflows must be designed to ensure users can interact with the software effectively and
efficiently.
o Documentation:
Architectural design that works is extensively documented. Developers and other stakeholders can refer to
the documentation, which explains the design choices, components, and reasoning behind them. It improves
understanding and communication.
o Price-Performance:
The architectural plan should take the project's resources and budget into consideration. It entails choosing
technologies, resources, and development initiatives wisely and economically.
o Validation and Testing:
The architectural design should include plans for evaluating and verifying the interactions and parts of the
system. This guarantees that the system meets the requirements and operates as intended.
Advantages of Architectural Design

1. Structure and Clarity: The organization of the software system is represented in a clear and organized
manner by architectural design. It outlines the elements, their connections, and their duties. This clarity
makes it easier for developers to comprehend how various system components work together and
contribute to the system's functionality. Comprehending this is essential for effective development and
troubleshooting.
2. Modularity: In architectural design, modularity divides a system into more manageable, independent
modules or components. Because each module serves a distinct purpose, managing, testing, and maintaining
it is made simpler. Developers can work on individual modules independently, improving teamwork and
lessening the possibility of unexpected consequences from changes.
3. Scalability: A system's scalability refers to its capacity to accommodate growing workloads and expand over
time. Thanks to an architectural design that supports scalability, the system can accommodate more users,
data, and transactions without requiring a major redesign. Systems that must adjust to shifting user needs
and business requirements must have this.
4. Maintenance and Expandability: The extensibility and maintenance of software are enhanced by
architectural design. Upgrades, feature additions, and bug fixes can be completed quickly and effectively with
an organized architecture. It lowers the possibility of introducing new problems during maintenance, which
can greatly benefit software systems that last a long time.
5. Performance Optimization: Performance optimization ensures the system meets parameters like response
times and resource usage. Architectural design allows choosing effective algorithms, data storage plans, and
other performance-boosting measures to create a responsive and effective system.
6. Security: An essential component of architectural design is security. Access controls, encryption, and
authentication are a few security features that can be incorporated into the architecture to protect sensitive
data and fend off attacks and vulnerabilities. A secure system starts with a well-designed architecture.
7. Reliability: When a system is reliable, it operates as planned and experiences no unplanned malfunctions. By
structuring the system to handle errors and recover gracefully from faults, architectural design helps
minimize failures. Moreover, it makes it possible to employ fault-tolerant and redundancy techniques to raise
system reliability.

Disadvantages of Architectural Design


1. Initial Time Investment: Developing a thorough architectural design can take a while, particularly for
complicated projects. The project may appear to be delayed during this phase as developers and architects
devote time to planning and making decisions. However, by offering a clear roadmap, this initial investment
can save time in later stages of development.
2. Over-Engineering: When features or complexity in the architectural design are redundant or unnecessary for
the project's objectives, it's known as over-engineering. When developers work on components that add
little value to the final product, this can result in longer development times and higher development costs.
3. Rigidity: It can be difficult to adjust an architecture that is too rigid to new requirements or technological
advancements. The architecture may make it more difficult for the system to adapt and change to meet
changing business needs if it is overly rigid and does not permit changes.
4. Complexity: Comprehending and maintaining complex architectural designs can be challenging, particularly
for developers not part of the original design process. Because it is more difficult to manage and
troubleshoot, an excessively complex architecture may lead to inefficiencies and increased maintenance
costs.
5. Misalignment with Requirements: Occasionally, there may be a discrepancy between the architectural plan
and the actual project specifications, leading to needless complications or inefficiencies. This misalignment
may require additional labour and modifications to guarantee the system achieves its objectives.
6. Resistance to Change: Even when required, significant changes may encounter resistance once an
architectural design is established. This might result from the money spent on the current architecture or
worries about possible project delays.
7. Resource Intensive: A complex architectural design may require specialized resources to develop and
maintain, such as architects, documentation efforts, and quality assurance. Project costs and management
overhead may arise due to these increased resource demands.
8. Communication Challenges: Interpreting architectural design documents can be difficult, especially if they
are unclear or not efficiently shared with all team members and stakeholders. Deviations from the intended
design may result from misunderstandings or misinterpretations.
9. Risk of Overlooked Issues: There's a chance that, in extremely complex architectural designs, possible
problems or challenges will be missed, resulting in unforeseen issues during implementation. Later in the
development process, these problems might appear, delaying things and requiring more work.
These drawbacks can be lessened with cautious planning, clear communication, and harmony between the
demands of the project and design rigour. The aim is to create an architectural design that fulfils the project's
needs without adding needless rigidity or complexity.

• Component Level Design


Component design is all about taking complex software systems and breaking them into small, reusable pieces,
or simply modules. These parts are each responsible for certain functionalities, so programming with them is
like building a puzzle from small pieces that eventually create more complex architectures. Component
design exhibits a modularized approach to software development where the units are organized
systematically to facilitate control of complexity and increase manageability.
• Components are the fundamental building blocks of software architecture; they are mostly responsible for
  providing the software's functionality or offering services.
• When several features are implemented as independent units called components, modularity, reusability,
  and maintainability are ensured.
• This technique promotes flexibility and scalability, enabling the formation of dynamic systems composed of
  more basic units that interlock with each other.

Characteristics of Component-Based Design


Below are the characteristics of component-based design:
• Modularity: Each component encapsulates a certain function or service and essentially serves as a reusable,
  interchangeable, and therefore independent software module, enhancing flexibility in development, testing,
  and maintenance.
• Reusability: Components are envisioned to be adaptable and reusable across different projects, giving rise to
  shorter development time and lower overheads and making the work of developers much easier and more
  systematic.
• Interoperability: Components are connected through clearly defined interfaces, ensuring that software
systems maintain interaction continuity and carry out various functions within their ecosystems.
• Encapsulation: Components have got encapsulated constructs, which precisely reveal internal details at the
interface level, through a provision of only a few interfaces essential for interacting with other components,
thus enforcing abstraction by hiding underlying implementation details.
• Scalability: Component-based architectures simplify scalability by providing a way for systems to grow
organically via the addition or modification of components without the impacting overall the architecture.
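To make these ideas concrete, here is a minimal sketch in Python (the payment names and interface are invented for illustration, not taken from any particular system): a component publishes a small interface and keeps its implementation hidden, so it can be swapped or reused freely.

from abc import ABC, abstractmethod

class PaymentInterface(ABC):
    """The published interface: the only part other components ever see."""
    @abstractmethod
    def pay(self, amount: float) -> bool:
        ...

class CardPaymentComponent(PaymentInterface):
    """A concrete, swappable component; its internals stay encapsulated."""
    def pay(self, amount: float) -> bool:
        # Validation, gateway calls, etc. would be hidden in here.
        return amount > 0

def checkout(processor: PaymentInterface, amount: float) -> None:
    # Client code depends only on the interface, never the implementation.
    print("paid" if processor.pay(amount) else "declined")

checkout(CardPaymentComponent(), 49.99)

Because checkout knows only the interface, a different component (say, a wallet processor) could replace CardPaymentComponent without any change to the calling code.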

Types of Components
• UI Components
o User interface components combine presentational and visual elements such as buttons, forms, and widgets, and provide a convenient way to encapsulate presentation logic.
• Service Components
o Service components implement business logic or application services, serving as the platform for activities such as data processing, authentication, and communication with external systems.
• Data Components
o Through data abstraction and the provision of interfaces for data access, data components take care of database interaction and provide data structures for querying, updating, and saving data.
• Infrastructure Components
o Infrastructure components provide fundamental services and resources, such as logging, caching, security, and communication protocols, on which a software system depends.

• User Interface Design


The user interface is the visual part of a computer application or operating system through which a user interacts with the computer or the software. It determines how commands are given to the computer or the program and how information is displayed on the screen.
Types of User Interface
There are two main types of User Interface:
o Text-Based User Interface or Command Line Interface
o Graphical User Interface (GUI)
Text-Based User Interface: This method relies primarily on the keyboard. A typical example is the UNIX command line.
Advantages
o Offers many options and is easier to customize.
o Typically capable of more powerful tasks.
Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.
Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.
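To contrast the two styles in code, here is a minimal text-based interface sketched with Python's standard argparse module (the greet program and its --shout flag are invented for the example): the user must recall the command syntax rather than recognize options on screen.

import argparse

# A tiny command-line interface: input arrives as typed commands and flags.
parser = argparse.ArgumentParser(description="Greet a user from the terminal.")
parser.add_argument("name", help="name of the person to greet")
parser.add_argument("--shout", action="store_true", help="print in upper case")
args = parser.parse_args()

message = f"Hello, {args.name}!"
print(message.upper() if args.shout else message)

Run as `python greet.py Asha --shout`; a GUI would instead present the same choices visually as a text box and a checkbox.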

• Web Application Design.

In software engineering, Web Application Design refers to the process of planning, conceptualizing, and
structuring the interface, architecture, and user interactions of a web-based application. A well-designed
web application ensures that it is user-friendly, efficient, scalable, secure, and responsive across different
devices. Web application design encompasses both front-end and back-end aspects, and involves several
phases and key principles.
Key Components of Web Application Design
1. User Interface (UI) Design
o Focuses on the layout and appearance of the web application.
o Ensures that the interface is intuitive, aesthetically pleasing, and aligned with the application's
purpose.
o Uses elements like buttons, forms, navigation menus, and other controls to allow users to interact
with the application.
2. User Experience (UX) Design
o Ensures that the web application is user-centered, providing an easy, satisfying, and enjoyable
experience.
o Emphasizes the ease of navigation, accessibility, and the clarity of content.
o UX design involves creating wireframes, user journeys, and interaction flows to ensure seamless user
interaction.
3. Front-End Development
o Deals with the client side of the application, which includes everything that the user interacts with
directly.
o Technologies used include:
▪ HTML (HyperText Markup Language): For structuring the content.
▪ CSS (Cascading Style Sheets): For styling and visual design.
▪ JavaScript: For dynamic interactions and functionality, like form validation, real-time
updates, etc.
o Front-end frameworks and libraries like React.js, Angular, and Vue.js are often used to speed up
development.
4. Back-End Development
o Focuses on the server side, handling data processing, business logic, and database management.
o Technologies used include:
▪ Server-Side Languages: Such as Node.js, Python, Ruby on Rails, PHP, Java, or ASP.NET.
▪ Databases: To store, retrieve, and manage data. Common databases include MySQL,
PostgreSQL, MongoDB, and SQL Server.
o Ensures proper communication between the front-end and back-end through APIs (Application
Programming Interfaces).
5. Database Design
o Involves designing the database structure to store and manage data efficiently.
o Ensures that the database is normalized, and relationships between tables (entities) are well defined.
o Relational databases (like MySQL) and NoSQL databases (like MongoDB) are commonly used
depending on the type and scale of data.
6. Architecture Design
o The architecture of a web application defines how its components and services are organized and
interact with each other.
o Common architectural styles include:
▪ Monolithic Architecture: Where the application is built as a single, unified system.
▪ Microservices Architecture: Where the application is broken into smaller, independently
deployable services.
▪ MVC (Model-View-Controller) Architecture: Separates the application logic into three
components—Model (data), View (UI), and Controller (logic).
7. Security Design
o Ensures that the application is protected from various threats, such as data breaches, unauthorized
access, and cyberattacks.
o Key practices include:
▪ Authentication (verifying the identity of users) and authorization (ensuring that users have
permissions to access certain resources).
▪ Data Encryption (using SSL/TLS for secure data transmission).
▪ Input Validation (to prevent security vulnerabilities like SQL injection and Cross-Site Scripting
(XSS)).
8. Performance Optimization
o A critical part of web application design, focusing on making the application fast and responsive.
o Techniques include:
▪ Caching: Storing frequently accessed data temporarily to reduce load times.
▪ Content Delivery Networks (CDNs): Distributing static content across servers globally to
improve access times.
▪ Database Optimization: Ensuring queries are efficient and indexing is used properly.
9. Responsive Design
o Ensures that the web application works across a wide range of devices (desktops, tablets,
smartphones) and screen sizes.
o Achieved using CSS media queries and responsive frameworks like Bootstrap or Foundation.
10. API Design
o If the web application needs to communicate with other applications, services, or systems, API
design becomes crucial.
o Common choices are RESTful APIs and GraphQL.
o Ensures secure, scalable, and efficient interaction between the client and the server (a minimal sketch follows this list).
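As a rough sketch of how the front-end, back-end, and API design points above fit together, the following uses Python's Flask framework (the /api/users route and the in-memory list are invented for the example; a real application would sit on a database):

from flask import Flask, jsonify, request

app = Flask(__name__)

# Invented in-memory store standing in for a real database layer.
users = [{"id": 1, "name": "Asha"}]

@app.route("/api/users", methods=["GET"])
def list_users():
    # The front end (HTML/CSS/JavaScript) would call this with fetch().
    return jsonify(users)

@app.route("/api/users", methods=["POST"])
def create_user():
    data = request.get_json()
    user = {"id": len(users) + 1, "name": data["name"]}
    users.append(user)
    return jsonify(user), 201

if __name__ == "__main__":
    app.run(debug=True)

The browser-side code and the server communicate only through these JSON endpoints, which is what keeps the client and the server independently replaceable.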
Phases of Web Application Design
1. Requirements Gathering
o Understanding the needs of the stakeholders, users, and business goals.
o Requirements may include functional requirements (features) and non-functional requirements
(performance, security, etc.).
2. Conceptual Design
o Sketching the layout and structure of the web application.
o Involves creating user personas, use case scenarios, wireframes, and prototypes to visualize the flow
and interface.
o Tools like Figma, Sketch, and Adobe XD are commonly used for creating wireframes and UI/UX
design prototypes.
3. Designing the Architecture
o Designing how different parts of the system will communicate.
o This includes database schema design, API design, and defining the interaction between the front-
end, back-end, and databases.
4. Development
o Implementing both the front-end and back-end of the web application based on the architecture.
o During development, code management tools like Git and GitHub are used to manage versions and
collaborate on the project.
5. Testing
o Testing the web application for functionality, usability, performance, security, and compatibility
across different browsers and devices.
o Types of testing include:
▪ Unit Testing: Testing individual units or components.
▪ Integration Testing: Ensuring components work well together.
▪ End-to-End Testing: Testing the entire application from start to finish.
▪ Load Testing: Testing how the application performs under heavy load.
▪ Cross-browser Testing: Ensuring compatibility across different web browsers.
6. Deployment
o Deploying the web application to a web server, making it accessible over the internet.
o Common hosting services include AWS, Google Cloud Platform (GCP), Microsoft Azure, or Heroku.
7. Maintenance and Updates
o After deployment, the web application requires continuous monitoring, bug fixing, and updates to
improve functionality or security.
Web Application Design Architecture (Diagram)
A typical web application architecture might look something like this:
+-----------------------+
| Client (Browser) |
+-----------------------+
|
v
+-----------------------------+
| Front-End (UI) |
| (HTML, CSS, JavaScript) |
+-----------------------------+
|
v
+-----------------------+
| Web Server (API) |
| (Node.js, Python, etc.) |
+-----------------------+
|
v
+-----------------------------+
| Database |
| (MySQL, MongoDB, etc.) |
+-----------------------------+
Best Practices in Web Application Design
1. Keep it simple: Simplicity enhances usability and maintainability.
2. Prioritize security: Ensure the application is designed with security in mind.
3. Use modern frameworks: Leverage front-end and back-end frameworks for fast and scalable development.
4. Optimize for performance: Use caching, CDNs, and lazy loading to improve load times.
5. Ensure responsive design: The application must work well on both desktop and mobile devices.
Conclusion
Web Application Design is a multidisciplinary process that requires collaboration between UI/UX designers,
front-end developers, back-end developers, and database engineers. A successful web application is one that
is responsive, secure, scalable, and meets both user and business needs effectively. Following a structured
design approach ensures that the application is reliable, user-friendly, and able to evolve as user needs and
technology change.
Unit - 3 Software Coding & Testing Coding Standard and coding Guidelines, Code
Review, Software Documentation, Testing Strategies, Testing Techniques and Test
Case, Test Suites Design, Testing Conventional Applications, Testing Object Oriented
Applications, Testing Web and Mobile Applications, Testing Tools (Win runner, Load
runner). Quality Concepts and Software Quality Assurance, Software Reviews
(Formal Technical Reviews), Software Reliability, The Quality Standards: ISO 9000,
CMM, Six Sigma for SE, SQA Plan

Software Testing Tutorial


Software testing tutorial provides basic and advanced concepts of software testing. Our software testing
tutorial is designed for beginners and professionals.
Software testing is a widely used technology because it is compulsory to test each and every software product before deployment.
Our Software testing tutorial includes all topics of Software testing, such as methods (Black Box Testing, White Box Testing, Visual Box Testing, and Gray Box Testing) and levels and types (Unit Testing, Integration Testing, Regression Testing, Functional Testing, System Testing, Acceptance Testing, Alpha Testing, Beta Testing, Non-Functional Testing, Security Testing, and Portability Testing).
What is Software Testing
Software testing is the process of verifying the correctness of software by considering all of its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find software bugs, errors, or defects.

Software testing provides an independent, objective view of the software and gives assurance of its fitness. It involves testing all components under the required services to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software.
Testing is mandatory because it would be dangerous if the software failed at any time due to a lack of testing. So, without testing, software cannot be deployed to the end user.
What is Testing
Testing is a group of techniques to determine the correctness of the application under a predefined script, but testing cannot find all the defects of an application. The main intent of testing is to detect failures of the application so that failures can be discovered and corrected. It does not demonstrate that a product functions properly under all conditions, but only that it is not working under some specific conditions.
Testing furnishes a comparison of the behavior and state of the software against mechanisms through which problems can be recognized. These mechanisms may include past versions of the same product, comparable products, interfaces of expected purpose, relevant standards, or other criteria, but are not limited to these.
Testing includes the examination of code as well as the execution of that code in various environments and conditions, along with examining all aspects of the code. In the current scenario of software development, a testing team may be separate from the development team, so that information derived from testing can be used to correct the software development process.
The success of software depends upon the acceptance of its targeted audience, an easy graphical user interface, strong functionality, load tolerance, and so on. For example, the audience of banking software is totally different from the audience of a video game. Therefore, when an organization develops a software product, it can assess whether the product will be beneficial to its purchasers and other audiences.
Type of Software testing
We have various types of testing available, which are used to test the application or the software. The two broad types, described below, are manual testing and automation testing:

Manual testing
The process of checking the functionality of an application as per the customer's needs, without taking any help from automation tools, is known as manual testing. While performing manual testing on any application, we do not need specific knowledge of any testing tool; rather, we need a proper understanding of the product so we can easily prepare the test documents.
Manual testing can be further divided into three types of testing, which are as follows:
o White box testing
o Black box testing
o Gray box testing
For more information about manual testing, refer to the link below:
https://www.javatpoint.com/manual-testing
Automation testing
Automation testing is the process of converting manual test cases into test scripts with the help of automation tools or a programming language. With the help of automation testing, we can enhance the speed of our test execution, because no human effort is required: we write a test script once and then execute it.
For more information about automation testing, refer to the link below:
https://www.javatpoint.com/automation-testing
Prerequisite
Before learning software testing, you should have a basic knowledge of computer functionality, mathematics, computer languages, and logical operators.
Audience
Our software testing tutorial is designed for beginners and professionals.
Problems
We assure you that you will not find any problem in this Software Testing Tutorial. But if there is any mistake, please report the problem via the contact form.

• Coding Standard and coding Guidelines in software engineering


Coding standards and guidelines are the backbone of clean, maintainable, and consistent code. They ensure everyone on a team is on the same page, making the code easier to read, understand, and debug.
Coding standards are like the grammar rules of programming. They dictate how code should be written in terms of formatting, naming conventions, and more. For example, deciding whether to use camelCase or snake_case for variables.
Coding guidelines, on the other hand, are more about best practices and principles. They cover things like the importance of code comments, how to handle exceptions, and designing for performance and security.
Sticking to both coding standards and guidelines can drastically improve software quality and make life easier for your future self (and anyone else who touches the code). Here's a toast to fewer headaches and more robust programs.
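A small Python sketch makes the contrast concrete (the function and its names are invented for the example; the conventions shown follow the PEP 8 style guide):

# Ignores the standard: cryptic name, no documentation, no checks.
def CalcT(x,y): return x+x*y

# Follows the standard and the guidelines: snake_case naming,
# a docstring, and explicit handling of bad input.
def calculate_total(price, tax_rate):
    """Return the price including tax.

    price    -- net price of the item
    tax_rate -- tax as a fraction, e.g. 0.18 for 18%
    """
    if price < 0 or tax_rate < 0:
        raise ValueError("price and tax_rate must be non-negative")
    return price * (1 + tax_rate)

Both functions compute the same thing; only the second one is pleasant to review, test, and maintain.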

• Code Review

Code reviews are like the quality control checkpoint in software engineering. Before code gets merged into the main project, another set of eyes looks it over. The goal is to catch bugs, ensure code quality, and share knowledge among team members.
Why it's essential:
1. Catch Errors Early: Identifies issues before they become major problems.
2. Improve Code Quality: Encourages best practices and adherence to coding standards.
3. Knowledge Sharing: Helps team members learn from each other and understand different parts of the codebase.
4. Collaborative Culture: Promotes teamwork and collaboration.
The process usually involves developers submitting their code changes (pull requests) and reviewers examining the code, providing feedback, and suggesting improvements. Once the code meets the required standards, it gets approved and merged.

• Software Documentation
Software documentation is the unsung hero of the development process. It's what makes code understandable, maintainable, and usable long after it's been written.
There are a few key types:
1. Code Documentation: Inline comments and descriptions that explain what specific blocks of code do. Think of it as a guide for anyone who dives into the code later.
2. Technical Documentation: Detailed explanations of how the system works, including architecture diagrams, API references, and database schemas.
3. User Documentation: Manuals, guides, and help files that explain how to use the software. This is often aimed at end users or customers.
Good documentation ensures that everyone from developers to end users can understand and use the software effectively. It's like the breadcrumbs that keep everyone from getting lost in the forest of code.
• Testing Strategies
Testing strategies are critical to ensuring that software performs as expected and is free from defects. They cover a range of approaches to validate everything from individual units of code to the system as a whole.
1. Unit Testing: Testing individual components or functions in isolation. It's like zooming in on a single piece of the puzzle to make sure it fits perfectly.
2. Integration Testing: Ensuring that different components or systems work together as expected. Think of it as checking that puzzle pieces fit together seamlessly.
3. System Testing: Testing the complete and integrated software to verify it meets the requirements. It's like looking at the entire puzzle to ensure it forms the correct picture.
4. Acceptance Testing: Validating the software against user requirements and ensuring it provides the intended value. This is like having someone who ordered the puzzle verify that it's the one they wanted.
5. Performance Testing: Assessing how the software performs under various conditions, such as load, stress, and scalability. It's like making sure the puzzle can withstand a bit of rough handling and still look good.
6. Security Testing: Identifying vulnerabilities and ensuring the software is secure against potential threats. It's like checking that the puzzle has no missing pieces or flaws that could compromise its integrity.

• Testing Techniques and Test Case

Testing techniques and creating effective test cases are the bread and butter of ensuring software quality.
Testing Techniques:
1. Black Box Testing: Testing without looking at the internal code. Focus is on input and output. It's like testing a car by driving it, without peeking under the hood.
2. White Box Testing: Testing with full knowledge of the internal code. It's like diving deep into the engine of the car to check every component.
3. Grey Box Testing: A mix of both, where some knowledge of the internal workings is available. It's like knowing the car's design but focusing mainly on its performance.
4. Exploratory Testing: No predefined cases; testers explore the software to find bugs. Think of it as freestyle driving to find unexpected issues.
5. Regression Testing: Ensuring new code changes don't break existing functionality. It's like rechecking the car's performance after adding a new part.
Test Case: A test case is a set of conditions or variables used to determine whether a system meets requirements and works correctly. Good test cases are specific, repeatable, and cover both positive and negative scenarios.
Key components of a test case include (a runnable sketch follows this list):
• Test Case ID: Unique identifier.
• Description: A brief summary of what’s being tested.
• Preconditions: Any setup needed before executing the test.
• Test Steps: Step-by-step instructions to execute the test.
• Expected Result: What should happen if everything works correctly.
• Actual Result: What actually happens when the test is executed.
• Status: Pass or fail based on whether the actual result matches the expected result.
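To show how those components map onto an automated test, here is a hedged sketch using Python's built-in unittest framework (the calculate_total function under test is invented for the example):

import unittest

def calculate_total(price, tax_rate):
    """Unit under test (invented for the example)."""
    return price * (1 + tax_rate)

class TestCalculateTotal(unittest.TestCase):
    """TC-001: the total must include tax."""

    def setUp(self):
        # Precondition: a known input fixture exists before the test runs.
        self.price, self.tax_rate = 100.0, 0.18

    def test_total_includes_tax(self):
        # Test step: execute the unit. Expected result: 118.0.
        actual = calculate_total(self.price, self.tax_rate)
        # Pass/fail status is decided by comparing actual vs. expected.
        self.assertAlmostEqual(actual, 118.0)

if __name__ == "__main__":
    unittest.main()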

• Test Suites Design

Software testing enables verification of every aspect and feature of the software. This often leads to the development of a large number of test cases; as the count of test cases increases, they can become mismanaged and end up disorganized. A software test suite prevents such a situation from occurring.
What is a Software Test Suite?
A test suite is a methodical arrangement of test cases developed to validate specific functionalities. Each test case in a suite verifies a particular functionality or performance goal, and together the test cases in a suite are used to verify the quality and dependability of the software.

What is Software Test Suite Composed of?


A test suite is composed of the items listed below (a runnable sketch follows the list) −
• Test Cases − They describe the particular input situations, steps to execute, expected results to test a specific
feature of the software.
• Test Scripts − They describe a group of automated sequences of commands that are needed to execute a test
case. They can be developed using multiple languages, and are utilized to automate the testing activities.
• Test Data − They constitute the set of inputs required at the time of test execution. They play a very
important role in verifying multiple scenarios, and situations.
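A minimal sketch of these pieces in Python's unittest framework (the two test classes are placeholders invented for the example) shows how individual test cases are gathered into one runnable suite:

import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)          # placeholder for a real login check

class PaymentTests(unittest.TestCase):
    def test_refund_amount(self):
        self.assertEqual(2 + 2, 4)     # placeholder for a real refund check

# The suite: a methodical arrangement of related test cases.
loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
suite.addTest(loader.loadTestsFromTestCase(LoginTests))
suite.addTest(loader.loadTestsFromTestCase(PaymentTests))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)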

• Testing Conventional Applications


Conventional testing refers to the traditional approach of software testing that has been widely used for
several decades. This approach involves a series of activities that aim to identify defects or errors in a
software product and ensure that the software meets the specified requirements and performs as expected.

Stages in Conventional testing


The conventional testing process typically follows a structured and sequential approach that consists of
several stages, starting with the planning phase and ending with the release of the software. The following
are the key stages involved in conventional testing:
1. Planning: This stage involves defining the testing objectives, creating test plans, and identifying the testing
resources needed to carry out the testing activities.
2. Requirements analysis: In this stage, the software requirements are analysed to determine the scope of
testing, identify potential risks, and develop test cases.
3. Design: In this stage, the test cases are designed to validate the software functionality and ensure that the
software meets the specified requirements.
4. Execution: This stage involves running the test cases and reporting defects or errors found in the software.
5. Reporting: This stage involves documenting the test results, including the defects or errors found, and
presenting them to the development team for fixing.
6. Retesting: This stage involves rerunning the test cases to ensure that the defects or errors found have been
fixed and that the software now meets the specified requirements.
7. Release: In this stage, the software is released for use by the end-users after ensuring that it meets the
specified quality standards.
The above stages are typically carried out by a dedicated team of software testers who are responsible for
ensuring the quality of the software product. Conventional testing typically focuses on functional testing,
which involves testing the software's behaviour against the specified requirements.
Types of Conventional Testing
There are various types of conventional testing techniques that can be used during the testing process. Some
of the commonly used techniques include:
1. Unit testing: This involves testing individual modules or components of the software to ensure that they
perform as expected.
2. Integration testing: This involves testing the software modules in combination to ensure that they work
together correctly.
3. System testing: System testing verifies the software product as a complete, integrated whole. The objective of a system test is to gauge, from beginning to end, how well the system requirements are met. In most cases, the software itself is merely a small portion of a larger computer-based system.
4. Acceptance testing: This involves testing the software from the end-user's perspective to ensure that it
meets their needs and requirements.
5. Regression testing: This involves rerunning previously executed test cases to ensure that the changes made
to the software have not introduced new defects or errors.
Advantages of Conventional Testing
Conventional testing, also known as manual testing, involves testing software applications by human testers
without the use of automated tools. Some advantages of conventional testing include:
1. Flexibility: Manual testing allows for greater flexibility in terms of the types of tests that can be conducted,
as testers can easily adapt to changes in requirements and adjust their test cases accordingly.
2. Human intuition: Humans possess an intuition that allows them to detect issues that automated testing tools
may miss. Manual testers can use their experience and instincts to uncover defects that would otherwise go
unnoticed.
3. Cost-effective: Manual testing can be less expensive than automated testing, especially for smaller projects
or organizations that may not have the resources to invest in automated testing tools.
4. Better understanding of user experience: Manual testers can better simulate the user experience by testing
the application in a more real-world setting, which can lead to a better understanding of how users will
interact with the application.
5. Testing of non-functional requirements: Manual testing can also be used to test non-functional
requirements, such as usability, accessibility, and performance, which may be difficult to automate.
6. Better communication: Manual testing can promote better communication between testers, developers, and
other stakeholders as they work together to identify and resolve issues.
Disadvantages of Conventional Testing
While conventional testing or manual testing has its advantages, it also has some drawbacks. Some of the
disadvantages of conventional testing are:
1. Time-consuming: Manual testing can be time-consuming, especially for larger applications. Testing each
functionality manually can take a lot of time, which can delay the overall project delivery.
2. Limited coverage: Manual testing may not cover all possible test scenarios or paths. Testers may miss some
test cases due to human error, leading to defects that go unnoticed.
3. Subjective: Manual testing can be subjective as testers may have their own opinions and biases, which can
affect the quality of the testing. This can lead to inconsistencies in the testing process and make it difficult to
reproduce issues.
4. Costly in the long run: Although manual testing can be cost-effective in the short run, it can become
expensive in the long run as the project grows. The cost of hiring and training testers, as well as the cost of
manual testing tools, can add up over time.
5. Repetitive: Testing the same functionality repeatedly can be tedious and monotonous for testers, leading to
boredom and reduced productivity. This can result in a lower quality of testing.
6. Error-prone: Manual testing can be error-prone as it relies on the accuracy and consistency of human testers.
Testers may miss some defects or make errors in the testing process, which can lead to quality issues.

• Testing Object-Oriented Applications
Testing object-oriented applications focuses on classes and objects, and involves:
• Unit Testing: Testing individual methods and classes.
• Integration Testing: Ensuring that different parts of the system work together.
• Polymorphism and Inheritance Tests: Making sure that derived classes work correctly when inherited methods are called.
• Testing Web and Mobile Applications
Web and mobile app testing ensures that the applications perform well across different platforms and devices.
• Web Testing: Includes functionality, usability, compatibility, performance, and security testing.
• Mobile Testing: Covers various aspects like screen size, operating system, network conditions, and battery life.
• Testing Tools (WinRunner, LoadRunner)
• WinRunner: An automated functional GUI testing tool that allows the creation and execution of tests based on user actions.
• LoadRunner: A performance testing tool for examining system behavior and performance under load. It simulates multiple users accessing the application to identify and troubleshoot issues.
• Quality Concepts and Software Quality Assurance (SQA)
• Quality Concepts: Focus on preventing defects by ensuring processes are followed.
• SQA: Involves systematic activities to ensure the software meets the required quality standards. This includes audits, process standards, and testing strategies.
• Software Reviews (Formal Technical Reviews)
Formal technical reviews (FTRs) are structured processes in which team members examine the software product to identify defects. These reviews are planned, documented, and typically follow a strict protocol.
• Software Reliability
Software reliability is about ensuring that software performs correctly under specified conditions over time. It involves:
• Fault Tolerance: Ability to continue operation despite faults.
• Availability: Ensuring the system is operational when needed.
• Recovery: Ability to recover from failures quickly.

• Quality Standards: ISO 9000, CMM, Six Sigma for SE, SQA Plan
ISO 9000 Certification
ISO (the International Organization for Standardization) is a consortium of national standards bodies established to plan and foster standardization. ISO published its 9000 series of standards in 1987. The series serves as a reference for contracts between independent parties. The ISO 9000 standard determines the guidelines for maintaining a quality system. It mainly addresses operational methods and organizational methods, such as responsibilities, reporting, etc. ISO 9000 defines a set of guidelines for the production process and is not directly concerned with the product itself.
Types of ISO 9000 Quality Standards
The ISO 9000 series of standards is based on the assumption that if a proper process is followed for production, then good-quality products are bound to follow automatically. The types of industries to which the various
ISO standards apply are as follows.
1. ISO 9001: This standard applies to the organizations engaged in design, development, production, and
servicing of goods. This is the standard that applies to most software development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but are only involved in
the production. Examples of these category industries contain steel and car manufacturing industries that
buy the product and plants designs from external sources and are engaged in only manufacturing those
products. Therefore, ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and testing of the
products. For example, Gas companies.
How to get ISO 9000 Certification?
An organization that decides to obtain ISO 9000 certification applies to a registrar for registration. The process consists of the following stages:

1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document Review and Adequacy Audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the review.
5. Registration: The registrar awards ISO certification after the successful completion of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.

• CMM

Software Engineering Institute Capability Maturity Model (SEICMM)


The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's software
development process.
The model defines a five-level evolutionary stage of increasingly organized and consistently more mature
processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD).
Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software
process.
Methods of SEICMM
There are two methods of SEICMM:
Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely contractor performance if the contractor is awarded a piece of work. Therefore, the results of a software process capability assessment can be used to select a contractor.
Software Process Assessment: Software process assessment is used by an organization to improve its
process capability. Thus, this type of evaluation is for purely internal use.
SEI CMM categorizes software development organizations into the following five maturity levels. The various levels of SEI CMM have been designed so that it is easy for an organization to build its quality system gradually, starting from scratch.

Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are established.
Size and cost estimation methods, like function point analysis, COCOMO, etc. are used.
Level 3: Defined
At this level, the methods for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Although the processes are defined, the process and product qualities are not yet measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected. Product metrics measure the features of the product being developed, such as its size, reliability, time complexity, and understandability. Process metrics reflect the effectiveness of the process being used, such as the average defect correction time and productivity.
Level 5: Optimizing
At this level, process and product metrics are collected, and the measurement data are analyzed for continuous process improvement; lessons learned from completed projects are incorporated back into the process.

• Six Sigma
Six Sigma is the process of improving the quality of output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects (3.4 defective features per million opportunities).
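The arithmetic behind that 3.4-per-million figure can be checked with a short Python sketch (it assumes the conventional 1.5-sigma long-term shift that Six Sigma uses when quoting this number):

from statistics import NormalDist

dpmo = 3.4                        # defective features per million opportunities
p_defect = dpmo / 1_000_000       # probability one opportunity is defective

# z-score of the defect-free rate, plus the assumed 1.5 sigma drift.
z = NormalDist().inv_cdf(1 - p_defect)
print(f"z = {z:.2f}, sigma level = {z + 1.5:.2f}")   # about 4.50 and 6.00

In other words, a defect rate of 3.4 per million corresponds to about 4.5 standard deviations; adding the assumed 1.5-sigma long-term drift gives the quoted "six sigma" level.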

History of Six Sigma


Six Sigma is a set of methods and tools for process improvement. It was introduced by engineer Bill Smith while working at Motorola in 1986. In the 1980s, Motorola was developing its well-known Quasar televisions, but at the time there were many defects arising from picture quality and sound variations.
Using the same raw materials, machinery, and workforce, a Japanese firm took over Quasar television production, and within a few months they were producing Quasar TV sets with far fewer defects. This was achieved by improving management techniques.
Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and registered as a Motorola trademark on December 28, 1993; the company then became a quality leader.
Characteristics of Six Sigma
The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control: Six Sigma is named after the Greek letter σ (sigma), which is used to denote standard deviation in statistics. Standard deviation is used to measure variance, which is an essential tool for measuring non-conformance as far as the quality of output is concerned.
2. Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined, systematic approach to application in DMAIC and DMADV, which can be used to improve the quality of production. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. The alternative method, DMADV, stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach: The statistical and methodical aspects of Six Sigma show the scientific basis of the technique. This accentuates an essential element of Six Sigma: it is fact- and data-based.
4. Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's project, tailored to its specifications and requirements. The process is flexed to suit the requirements and conditions in which the projects operate, so as to get the best results.
5. Customer Focus: Customer focus is fundamental to the Six Sigma approach. The quality improvement and control standards are based on specific customer requirements.
6. Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it comes to controlling and improving quality. Six Sigma involves a lot of training, depending on the role of an individual in the quality management team.
Six Sigma Methodologies
Six Sigma projects follow two project methodologies:
1. DMAIC
2. DMADV

DMAIC
It specifies a data-driven quality strategy for improving processes. This methodology is used to enhance an
existing business process.
The DMAIC project methodology has five phases:

1. Define: It covers the process mapping and flow-charting, project charter development, problem-solving
tools, and so-called 7-M tools.
2. Measure: It includes the principles of measurement, continuous and discrete data, and scales of
measurement, an overview of the principle of variations and repeatability and reproducibility (RR) studies for
continuous and discrete data.
3. Analyze: It covers establishing a process baseline, how to determine process improvement goals, knowledge
discovery, including descriptive and exploratory data analysis and data mining tools, the basic principle of
Statistical Process Control (SPC), specialized control charts, process capability analysis, correlation and
regression analysis, analysis of categorical data, and non-parametric statistical methods.
4. Improve: It covers project management, risk assessment, process simulation, and design of experiments
(DOE), robust design concepts, and process optimization.
5. Control: It covers process control planning, using SPC for operational control and PRE-Control.
DMADV
It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product or process designs in such a way that the result is more predictable, mature, and defect-free performance.

• SQA Plan

Software Quality Assurance


What is Quality?
Quality refers to any measurable characteristic such as correctness, maintainability, portability, testability, usability, reliability, efficiency, integrity, reusability, and interoperability.
There are two kinds of Quality:

Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.
Quality of conformance: Quality of conformance is the degree to which the design specifications are
followed during manufacturing. Greater the degree of conformance, the higher is the level of quality of
conformance.
Software Quality: Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and inherent characteristics that are expected of all professionally developed software.
Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product.
Quality Assurance: Quality Assurance is the preventive set of activities that provide greater confidence that
the project will be completed successfully.
Quality Assurance focuses on how the engineering and management activities will be done. As everyone is interested in the quality of the final product, it should be assured that we are building the right product. This can be assured only when we inspect and review the intermediate products; if there are any bugs, they are debugged. In this way, quality can be enhanced.
Importance of Quality
We would expect quality to be a concern of all producers of goods and services. However, the distinctive characteristics of software, in particular its intangibility and complexity, make special demands.
Increasing criticality of software: The final customer or user is naturally concerned about the general quality of software, especially its reliability. This is increasingly the case as organizations become more dependent on their computer systems and as software is used more and more in safety-critical areas, for example to control aircraft.
The intangibility of software: This makes it challenging to know whether a particular task in a project has been completed satisfactorily. The results of these tasks can be made tangible by requiring the developers to produce 'deliverables' that can be examined for quality.
Accumulating errors during software development: As computer system development is made up of several steps where the output from one level is the input to the next, errors in earlier 'deliverables' are added to those introduced at later stages, leading to accumulated detrimental effects. In general, the later in a project an error is found, the more expensive it is to fix. In addition, because the number of errors in the system is unknown, the debugging phases of a project are particularly difficult to control.
Software Quality Assurance
Software quality assurance is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. It is a set of activities designed to evaluate the process by which the products are developed or manufactured.
SQA Encompasses
o A quality management approach
o Effective Software engineering technology (methods and tools)
o Formal technical reviews that are applied throughout the software process
o A multitier testing strategy
o Control of software documentation and the changes made to it.
o A procedure to ensure compliance with software development standards
o Measuring and reporting mechanisms.
Unit - 4 Software Maintenance and Configuration Management Types of Software
Maintenance,The SCM Process,Identification of Objects in the Software
Configuration, DevOps: Overview, Problem Case Definition, Benefits of Fixing
Application Development Challenges, DevOps Adoption Approach through
Assessment, Solution Dimensions, What is DevOps?, DevOps Importance and
Benefits, DevOps Principles and Practices, 7 C’s of DevOps Lifecycle for Business
Agility, DevOps and Continuous Testing, How to Choose Right DevOps Tools,
Challenges with DevOps Implementation, Must Do Things for DevOps, Mapping My
App to DevOps –

Software Configuration Management


When we develop software, the product (software) undergoes many changes during its maintenance phase; we need to handle these changes effectively.
Several individuals (programmers) work together to achieve these common goals. These individuals produce several work products (SC items), e.g., intermediate versions of modules, test data used during debugging, and parts of the final product.
The elements that comprise all information produced as part of the software process are collectively called a software configuration.
As software development progresses, the number of software configuration items (SCIs) grows rapidly. These are handled and controlled by SCM. This is where we require software configuration management.
A configuration of the product refers not only to the product's constituents but also to a particular version of each component.
Therefore, SCM is the discipline which:
o Identifies changes
o Monitors and controls changes
o Ensures the proper implementation of changes made to the item
o Audits and reports on the changes made
Configuration Management (CM) is a technique of identifying, organizing, and controlling modifications to software being built by a programming team.
The objective is to maximize productivity by minimizing mistakes (errors).
CM is essential for the inventory management, library management, and update management of the items needed for the project.
Why do we need Configuration Management?
Multiple people work on software that is consistently being updated. There may be multiple versions, branches, and authors involved in a software project, with the team geographically distributed and working concurrently. Changes in user requirements, policy, budget, and schedule also need to be accommodated.
Importance of SCM
It is practical in controlling and managing access to various SCIs, e.g., by preventing two members of a team from checking out the same component for modification at the same time (see the toy sketch below).
It provides tools to ensure that changes are being properly implemented.
It has the capability of describing and storing the various constituents of the software.
SCM is used to keep a system in a consistent state by automatically producing derived versions upon modification of a component.
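As a toy sketch of that check-out control in Python (the item names and the in-memory lock table are invented; real SCM tools implement this far more robustly on top of a repository):

# One exclusive lock per software configuration item (SCI).
locks = {}   # maps item name -> developer currently holding it

def check_out(item, developer):
    """Grant the item to a developer unless someone else holds it."""
    holder = locks.get(item)
    if holder is not None and holder != developer:
        print(f"{item} is already checked out by {holder}")
        return False
    locks[item] = developer
    return True

def check_in(item, developer):
    """Release the lock so other team members may modify the item."""
    if locks.get(item) == developer:
        del locks[item]

check_out("billing_module.py", "priya")   # succeeds
check_out("billing_module.py", "rahul")   # refused: conflicting edit prevented
check_in("billing_module.py", "priya")    # rahul may now check it out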

-------------------------------------------------------------------------------------------------
What is DevOps?

If you want to build better software faster, DevOps is the answer. Here’s how this software development
methodology brings everyone to the table to create secure code quickly.
DevOps defined

DevOps combines development (Dev) and operations (Ops) to increase the efficiency, speed, and security of
software development and delivery compared to traditional processes. A more nimble software development
lifecycle results in a competitive advantage for businesses and their customers.

DevOps explained

DevOps can be best explained as people working together to conceive, build, and deliver secure software at top speed. DevOps practices enable software development (dev) and operations (ops) teams to accelerate delivery through automation, collaboration, fast feedback, and iterative improvement. Stemming from an Agile approach to software development, a DevOps process expands on the cross-functional approach of building and shipping applications in a faster and more iterative manner.
In adopting a DevOps development process, you are making a decision to improve the flow and value delivery of your application by encouraging a more collaborative environment at all stages of the development cycle. DevOps represents a change in mindset for IT culture. In building on top of Agile, lean practices, and systems theory, DevOps focuses on incremental development and rapid delivery of software. Success relies on the ability to create a culture of accountability, improved collaboration, empathy, and joint responsibility for business outcomes.
DevOps is a combination of software development (dev) and operations (ops). It is defined as a software
engineering methodology which aims to integrate the work of development teams and operations teams by
facilitating a culture of collaboration and shared responsibility.
DevOps methodology

The DevOps methodology aims to shorten the systems development lifecycle and provide continuous
delivery with high software quality. It emphasizes collaboration, automation, integration and rapid feedback
cycles. These characteristics help ensure a culture of building, testing, and releasing software that is more
reliable and at a high velocity.
This methodology comprises four key principles that guide the effectiveness and efficiency of application
development and deployment. These principles, listed below, center on the best aspects of modern software
development.
Core DevOps principles
1. Automation of the software development lifecycle. This includes automating testing, builds, releases, the
provisioning of development environments, and other manual tasks that can slow down or introduce human
error into the software delivery process.
2. Collaboration and communication. A good DevOps team has automation, but a great DevOps team also has
effective collaboration and communication.
3. Continuous improvement and minimization of waste. From automating repetitive tasks to watching
performance metrics for ways to reduce release times or mean-time-to-recovery, high performing DevOps
teams are regularly looking for areas that could be improved.
4. Hyperfocus on user needs with short feedback loops. Through automation, improved communication and
collaboration, and continuous improvement, DevOps teams can take a moment and focus on what real users
really want, and how to give it to them.
By adopting these principles, organizations can improve code quality, achieve a faster time to market, and
engage in better application planning.
The four phases of DevOps

The evolution of DevOps has unfolded across four distinct phases, each marked by shifts in technology and
organizational practices. This progression reflects the growing complexity within DevOps, driven primarily by
two key trends:
1. Transition to Microservices: As organizations shift from monolithic architectures to more
flexible microservices architectures, the demand for specialized DevOps tools has surged. This shift aims to
accommodate the increased granularity and agility offered by microservices.
2. Increase in Tool Integration: The proliferation of projects and the corresponding need for more DevOps tools
have led to a significant rise in the number of integrations between projects and tools. This complexity has
prompted organizations to rethink their approach to adopting and integrating DevOps tools.
These four phases are as follows:
Phase 1: Bring Your Own DevOps (BYOD)
In the Bring Your Own DevOps phase, each team selected its own tools. This approach caused problems
when teams attempted to work together because they were not familiar with the tools of other teams. This
phase highlighted the need for a more unified toolset to facilitate smoother team integration and project
management.
Phase 2: Best-in-class DevOps
To address the challenges of using disparate tools, organizations moved to the second phase, Best-in-class
DevOps. In this phase, organizations standardized on the same set of tools, with one preferred tool for each
stage of the DevOps lifecycle. It helped teams collaborate with one another, but the problem then became
moving software changes through the tools for each stage.
Phase 3: Do-it-yourself (DIY) DevOps
To remedy this problem, organizations adopted do-it-yourself (DIY) DevOps, building on top of and between
their tools. They performed a lot of custom work to integrate their DevOps point solutions together.
However, since these tools were developed independently without integration in mind, they never fit quite
right. For many organizations, maintaining DIY DevOps was a significant effort and resulted in higher costs,
with engineers maintaining tooling integration rather than working on their core software product.
Phase 4: DevOps Platform
A single-application platform approach improves the team experience and business efficiency. A DevOps
platform replaces DIY DevOps, allowing visibility throughout and control over all stages of the DevOps
lifecycle.
By empowering all teams – Development, Operations, IT, Security, and Business – to collaboratively plan,
build, secure, and deploy software across an end-to-end unified system, a DevOps platform represents a
fundamental step-change in realizing the full potential of DevOps.
GitLab's DevOps platform is a single application powered by a cohesive user interface, agnostic of self-managed or SaaS deployment. It is built on a single codebase with a unified data store, which allows organizations to resolve the inefficiencies and vulnerabilities of an unreliable DIY toolchain.
How DevOps can benefit from AI and ML?
Artificial intelligence (AI) and machine learning (ML) are still maturing in their applications for DevOps, but
there is plenty for organizations to take advantage of today. They assist in analyzing test data, identifying
coding anomalies that could lead to bugs, as well as automating security and performance monitoring to
detect and proactively mitigate potential issues.
• AI and ML can find patterns, figure out the coding problems that cause bugs, and alert DevOps teams so they
can dig deeper.
• Similarly, DevOps teams can use AI and ML to sift through security data from logs and other tools to detect
breaches, attacks, and more. Once these issues are found, AI and ML can respond with automated mitigation
techniques and alerting.
• AI and ML can save developers and operations professionals time by learning how they work best, making
suggestions within workflows, and automatically provisioning preferred infrastructure configurations.
AI and ML excel in parsing vast amounts of test and security data, identifying patterns and coding anomalies
that could lead to potential bugs or breaches. This capability enables DevOps teams to proactively address
vulnerabilities and streamline alerting processes.
What is a DevOps platform?

DevOps brings the human silos together and a DevOps platform does the same thing for tools. Many teams
start their DevOps journey with a disparate collection of tools, all of which have to be maintained and many
of which don’t or can’t integrate. A DevOps platform brings tools together in a single application for
unparalleled collaboration, visibility, and development velocity.
A DevOps platform is how modern software should be created, secured, released, and monitored in a
repeatable fashion. A true DevOps platform means teams can iterate faster and innovate together because
everyone can contribute. This integrated approach is pivotal for organizations looking to navigate the
complexities of modern software development and realize the full potential of DevOps.
Benefits of a DevOps culture

The business value of DevOps and the benefits of a DevOps culture lies in the ability to improve the
production environment in order to deliver software faster with continuous improvement. You need the
ability to anticipate and respond to industry disruptors without delay. This becomes possible within an Agile
software development process where teams are empowered to be autonomous and deliver faster, reducing
work in progress. Once this occurs, teams are able to respond to demands at the speed of the market.
There are some fundamental concepts that need to be put into action in order for DevOps to function as
designed, including the need to:
• Remove institutionalized silos and handoffs that lead to roadblocks and constraints, particularly in instances where the measurements of success for one team are directly at odds with another team's key performance indicators (KPIs).
• Implement a unified tool chain using a single application that allows multiple teams to share and collaborate.
This will enable teams to accelerate delivery and provide fast feedback to one another.
Key benefits:
Adopting a DevOps culture brings numerous benefits to an organization, notably in operational efficiency,
faster delivery of features, and improved product quality. Key advantages include:
Enhanced Collaboration: Breaking down silos between development and operations teams fosters a more
cohesive working environment, leading to better communication and collaboration.
Increased Efficiency: Automation of the software development lifecycle reduces manual tasks, minimizes
errors, and accelerates delivery times.
Continuous Improvement: DevOps encourages a culture of continuous feedback, allowing teams to quickly
adapt and make improvements, ensuring that the software meets user needs effectively.
Higher Quality and Security: With practices like continuous integration and delivery (CI/CD) and proactive
security measures, DevOps ensures that the software is not only developed faster but also maintains high
quality and security standards.
Faster Time to Market: By streamlining development processes and improving team collaboration,
organizations can reduce the overall time from conception to deployment, offering a competitive edge in
rapidly evolving markets.
What is the goal of DevOps?

DevOps represents a change in mindset for IT culture. Building on Agile practices, DevOps focuses
on incremental development and rapid delivery of software. Success relies on the ability to create a culture
of accountability, improved collaboration, empathy, and joint responsibility for business outcomes.
Adopting a DevOps strategy enables businesses to increase operational efficiencies, deliver better products
faster, and reduce security and compliance risk.
The DevOps lifecycle and how DevOps works

The DevOps lifecycle stretches from the beginning of software development through to delivery, maintenance, and security. The stages of the DevOps lifecycle are listed below; a short automation sketch follows the list:
Plan: Organize the work that needs to be done, prioritize it, and track its completion.
Create: Write, design, develop, and securely manage code and project data with your team.
Verify: Ensure that your code works correctly and adheres to your quality standards — ideally with
automated testing.
Package: Package your applications and dependencies, manage containers, and build artifacts.
Secure: Check for vulnerabilities through static and dynamic tests, fuzz testing, and dependency scanning.
Release: Deploy the software to end users.
Configure: Manage and configure the infrastructure required to support your applications.
Monitor: Track performance metrics and errors to help reduce the severity and frequency of incidents.
Govern: Manage security vulnerabilities, policies, and compliance across your organization.
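To make the stages more concrete, here is a minimal sketch in Python of how the Verify and Package stages might be automated. The commands shown (pytest, docker) and the image tag myapp:latest are illustrative assumptions, not requirements of any particular DevOps platform:

import subprocess
import sys

# Map two lifecycle stages (Verify and Package) to concrete automated steps.
STAGES = [
    ("verify", ["pytest", "--quiet"]),                            # automated tests
    ("package", ["docker", "build", "-t", "myapp:latest", "."]),  # build an artifact
]

for name, command in STAGES:
    print(f"Running stage: {name}")
    result = subprocess.run(command)
    if result.returncode != 0:
        # Fail fast so later stages never run on top of a broken build.
        sys.exit(f"Stage '{name}' failed; stopping the pipeline.")

print("All stages passed - ready for the Release stage.")

A dedicated CI/CD platform replaces this kind of hand-rolled script with declarative pipeline configuration, but the fail-fast ordering of stages is the same idea.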
DevOps tools, concepts and fundamentals

DevOps covers a wide range of practices across the application lifecycle. Teams often start with one or more
of these practices in their journey to DevOps success.

Version control: The fundamental practice of tracking and managing every change made to source code and other files. Version control is closely related to source code management.

Agile: Agile development means taking iterative, incremental, and lean approaches to streamline and accelerate the delivery of projects.

Continuous Integration (CI): The practice of regularly integrating all code changes into the main branch, automatically testing each change, and automatically kicking off a build.

Continuous Delivery (CD): Continuous delivery works in conjunction with continuous integration to automate the infrastructure provisioning and application release process. They are commonly referred to together as CI/CD.

Shift left: A term for shifting security and testing much earlier in the development process. Doing this can help speed up development while simultaneously improving code quality (a small example follows this list).
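As a small example of shifting left, the following sketch is a hypothetical pre-commit hook in Python; the regex patterns are illustrative, not an exhaustive secret-detection rule set. It blocks a commit when staged files appear to contain hardcoded secrets, catching the problem before the code ever reaches CI:

import re
import subprocess
import sys

# Illustrative patterns only; real secret scanners use curated rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

# Ask git which files are staged for the current commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

for path in staged:
    try:
        text = open(path, encoding="utf-8", errors="ignore").read()
    except OSError:
        continue  # skip deleted or unreadable files
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            sys.exit(f"Possible hardcoded secret in {path}; commit blocked.")

print("No obvious secrets found; commit may proceed.")

Because the check runs on the developer's machine at commit time, feedback arrives long before a failed pipeline or a security review would surface the same issue.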

How does DevSecOps relate to DevOps?

Security has become an integral part of the software development lifecycle, with much of the security
shifting left in the development process. DevSecOps ensures that DevOps teams understand the security and
compliance requirements from the very beginning of application creation and can properly protect the
integrity of the software.
By integrating security seamlessly into DevOps workflows, organizations gain the visibility and control
necessary to meet complex security demands, including vulnerability reporting and auditing. Security teams
can ensure that policies are being enforced throughout development and deployment, including critical
testing phases.
DevSecOps can be implemented across an array of environments such as on-premises, cloud-native, and
hybrid, ensuring maximum control over the entire software development lifecycle.
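The dependency-scanning side of DevSecOps can be sketched in a few lines as well. The snippet below is a simplified illustration in Python: advisories.json is a hypothetical local file of known-bad versions (with an inline fallback), whereas real scanners query curated advisory databases:

import json
from importlib.metadata import distributions

# Hypothetical advisory data: package name -> list of vulnerable versions.
try:
    with open("advisories.json", encoding="utf-8") as f:
        vulnerable = json.load(f)
except FileNotFoundError:
    vulnerable = {"requests": ["2.19.0"], "flask": ["0.12"]}  # inline fallback

# Compare every installed distribution against the advisory list.
findings = [
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in distributions()
    if dist.version in vulnerable.get(dist.metadata["Name"].lower(), [])
]

if findings:
    print("Vulnerable dependencies found:", ", ".join(findings))
else:
    print("No known-vulnerable dependency versions detected.")

Real DevSecOps tooling runs checks like this automatically in every pipeline, so policy enforcement does not depend on anyone remembering to scan.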
How are DevOps and CI/CD related?

CI/CD — the combination of continuous integration and continuous delivery — is an essential part of DevOps
and any modern software development practice. A purpose-built CI/CD platform can maximize development
time by improving an organization’s productivity, increasing efficiency, and streamlining workflows through
built-in automation, continuous testing, and collaboration.
As applications grow larger, the features of CI/CD can help decrease development complexity. Adopting other
DevOps practices — like shifting left on security and creating tighter feedback loops — helps break down
development silos, scale safely, and get the most out of CI/CD.
How does DevOps support the cloud-native approach?

Moving software development to the cloud has so many advantages that more and more companies are adopting cloud-native computing. Building, testing, and deploying applications from the cloud saves money and lets organizations scale resources more easily, ship software faster, align with business goals, and free up DevOps teams to innovate rather than maintain infrastructure.
Cloud-native application development enables developers and operations teams to work more
collaboratively, which results in better software delivered faster.
What is a DevOps engineer?

A DevOps engineer is responsible for all aspects of the software development lifecycle, including
communicating critical information to the business and customers. Adhering to DevOps methodologies and
principles, they efficiently integrate development processes into workflows, introduce automation where
possible, and test and analyze code. They build, evaluate, deploy, and update tools and platforms (including
IT infrastructure if necessary). DevOps engineers manage releases, as well as identify and help resolve
technical issues for software users.
DevOps engineers require knowledge of a range of programming languages and a strong set of
communication skills to be able to collaborate among engineering and business groups.
Benefits of DevOps

Adopting DevOps breaks down barriers so that development and operations teams are no longer siloed and
have a more efficient way to work across the entire development and application lifecycle. Without DevOps,
organizations often experience handoff friction, which delays the delivery of software releases and negatively
impacts business results.
The DevOps model is an organization’s answer to increasing operational efficiency, accelerating delivery, and
innovating products. Organizations that have implemented a DevOps culture experience the benefits of
increased collaboration, fluid responsiveness, and shorter cycle times.
Collaboration
Adopting a DevOps model creates alignment between development and operations teams; handoff friction is
reduced and everyone is all in on the same goals and objectives.
Fluid responsiveness
More collaboration leads to real-time feedback and greater efficiency; changes and improvements can be implemented more quickly and guesswork is removed.
Shorter cycle time
Improved efficiency and frequent communication between teams shortens cycle time; new code can be
released more rapidly while maintaining quality and security.
