Software Engineering
Software Myths, Software Engineering: A Layered Technology, Software Process Models, The Linear Sequential Model, The Prototyping Model, The RAD Model, Evolutionary Process Models, Agile Process Model, Component-Based Development, Product and Process. Agility and Agile Process Model, Extreme Programming, Other Process Models of Agile Development and Tools.
The term software engineering is the product of two words: software and engineering.
Software consists of carefully organized instructions and code written by developers in any of various programming languages.
It also includes computer programs and related documentation such as requirements, design models, and user manuals.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.
Software Engineering is an engineering branch concerned with the development of software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.
o To manage large software
o Cost management
The need for software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.
o Huge Programming: It is simpler to build a wall than a house or building; likewise, as the size of software becomes extensive, engineering has to step in to give it a scientific process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale an existing one.
o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has brought down the cost of computer and electronic hardware. But the cost of software remains high if the proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of software hugely depends upon the environment in which the client works. If the nature of the software is continually changing, new upgrades need to be made to the existing one.
o Quality Management: A better process of software development provides a better-quality software product.
The importance of software engineering is evident in the following points:
1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering offers a great solution to reduce the complexity of any project. It divides big problems into various small issues and then solves each small issue one by one. All these small problems are solved independently of each other.
2. To minimize software cost: Software needs a lot of hard work, and software engineers are highly paid experts. A lot of manpower is required to develop software with a large amount of code. But in software engineering, programmers plan everything and eliminate the things that are not needed. In turn, the cost of software production becomes less than that of any software that does not use the software engineering method.
3. To decrease time: Anything that is not made according to a plan always wastes time. If you are making great software, you may need to write and run a lot of code to arrive at the definitive running version. This is a very time-consuming procedure, and if it is not well handled, it can take a lot of time. If you build your software according to the software engineering method, it will save a lot of time.
4. Handling big projects: Big projects are not done in a couple of days; they need lots of patience, planning, and management. Investing six or seven months of a company's time requires heaps of planning, direction, testing, and maintenance. No one can say that they have used four months of a company's time and the project is still in its first stage, because the company has committed many resources to the plan and it should be completed. So to handle a big project without any problems, the company has to adopt the software engineering method.
5. Reliable software: Software should be reliable, meaning that once delivered, it should work for at least its given time or subscription period. If any bugs appear in the software, the company is responsible for fixing them. Because testing and maintenance are built into software engineering, there is no worry about its reliability.
6. Effectiveness: Effectiveness comes when something is made according to standards. Meeting software standards is a major goal of companies wanting to make their products more effective. So software becomes more effective with the help of software engineering.
The software evolution process includes fundamental activities of change analysis, release planning, system
implementation, and releasing a system to customers.
1. The cost and impact of these changes are assessed to see how much the system is affected by the change and how much it might cost to implement the change.
2. If the proposed changes are accepted, a new release of the software system is planned.
3. During release planning, all the proposed changes (fault repair, adaptation, and new functionality) are
considered.
4. A decision is then made on which changes to implement in the next version of the system.
5. The process of change implementation is an iteration of the development process where the revisions to the
system are designed, implemented, and tested.
1. Change in requirements over time: With time, an organization's needs and modus operandi can change substantially, so in these frequently changing times the tools (software) it uses need to change to maximize performance.
2. Environment change: As the working environment changes, the things (tools) that enable us to work in that environment must change proportionally. The same happens in the software world: as the working environment changes, organizations require the reintroduction of old software with updated features and functionality to adapt to the new environment.
3. Errors and bugs: As the age of deployed software within an organization increases, its precision decreases, and its ability to bear an increasingly complex workload also continually degrades. In that case, it becomes necessary to avoid using obsolete and aged software. All such obsolete pieces of software need to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.
4. Security risks: Using outdated software within an organization may put you on the verge of various software-based cyberattacks and could illegally expose the confidential data associated with the software in use. So it becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software isn't robust enough to withstand currently occurring cyberattacks, it must be changed (updated).
5. For new functionality and features: In order to improve performance, speed up data processing, and add other functionality, an organization needs to continuously evolve its software throughout its life cycle so that the stakeholders and clients of the product can work efficiently.
Lehman's Laws of Software Evolution
1. Continuing Change: This law states that any software system that represents some real-world reality undergoes continuous change or becomes progressively less useful in that environment.
2. Increasing Complexity: As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this phenomenon.
3. Conservation of Organizational Stability: Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the resources devoted to system development.
4. Conservation of Familiarity: This law states that during the active lifetime of the program, the changes made in successive releases are almost constant.
The term “software crisis” refers to the numerous challenges and difficulties faced by the software industry during
the 1960s and 1970s. It became clear that old methods of developing software couldn’t keep up with the growing
complexity and demands of new projects. This led to high costs, delays, and poor-quality software. New
methodologies and tools were needed to address these issues.
Software Crisis is a term used in computer science for the difficulty of writing useful and efficient computer programs in the required time. The software crisis arose from using the same workforce, the same methods, and the same tools despite rapidly increasing software demand, software complexity, and software challenges. With the increase in software complexity, many software problems arose because existing methods were insufficient.
Suppose we use the same workforce, the same methods, and the same tools after a rapid increase in software demand, complexity, and challenges. In that case, issues arise such as software budget problems, efficiency problems, quality problems, and management and delivery problems. This condition is called a Software Crisis.
Software Crisis
• The cost of owning and maintaining software was as expensive as developing the software.
Conclusion
Software crisis refers to the challenges faced in developing efficient and useful computer programs due to increasing
complexity and demands. Factors like poor project management, inadequate training, and low productivity
contribute to this crisis. Addressing these issues through systematic approaches like software engineering, with a
focus on budget control, quality, timeliness, and skilled workforce, can mitigate the impact of the crisis.
The main causes of the Software Crisis are low-quality Software or when the Software does not meet user
requirements.
One of the famous software failures in computer science is Therac-25, a machine that was used to deliver radiation therapy to cancer patients.
The impact of the Software Crisis is that it affects the development of new software and also creates problems in the
maintenance of older software.
Software Myths:
Most experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.
Software Myths in Software Engineering
Software Myths are beliefs that do not have any pure evidence. Software myths may lead to many
misunderstandings, unrealistic expectations, and poor decision-making in software development projects. Some
common software myths include:
o The Myth of Perfect Software: Assuming that it's possible to create bug-free software. In Reality, software is
inherently complex, and it's challenging to eliminate all defects.
o The Myth of Short Development Times: Assuming that software can be developed quickly without proper planning, design, and testing. In Reality, rushing the development process leads to lower-quality software and missed deadlines.
o The Myth of Linear Progression: Assuming that software development proceeds in a linear, predictable manner. In Reality, development is often iterative and can involve unexpected setbacks and changes.
o The Myth of No Maintenance: It is thought that software development is complete once the initial version is
released. But in reality, the software requires maintenance and updates to remain functional and secure.
o The Myth of User-Developer Mind Reading: It is assumed that developers can understand user needs without clear and ongoing communication with users. But in reality, user feedback and collaboration are essential for correct software development.
o The Myth of Cost Predictability: It is thought that the cost of software can be easily predicted. But in reality, many factors can influence project costs, estimates are often subject to change, and there are many hidden costs.
o The Myth of Endless Features: It is believed that adding more features to software will make it better. But in
reality, adding more features to the software can make it complex and harder to use and maintain. It may
often lead to a worse user experience.
o The Myth of No Testing Needed: It is assumed that there is no need to test the software if the coder is skilled
or the code looks good. But in reality, thorough testing is essential to catch hidden defects and ensure
software reliability.
o The Myth of One-Size-Fits-All Methodologies: Thinking that a single software development methodology is suitable for all projects. But in reality, the methodology should be chosen to fit the specific project.
o The Myth of "We'll Fix It Later": It is assumed that a bug can be fixed at a later stage. But in reality, as the
code gets longer and bigger, it takes a lot of work to find and fix the bug. These issues can lead to increased
costs and project delays.
o The Myth of All Developers Are Interchangeable: It is believed that any developer can replace another
without any impact. But in reality, each developer has unique skills and knowledge that can significantly
affect the project. Each one has a different method to code, find, and fix the bugs.
o The Myth of No User Training Required: It is assumed that users will understand and use new software without any training. But in reality, users need training and documentation to use new software effectively.
o More Developers Equal Faster Development: It is believed that with a large number of developers, software development will take less time and the software will be of high quality. But in reality, larger teams can lead to communication overhead and may not always result in faster development.
o Zero-Risk Software: It is assumed that it's possible to develop software with absolutely no risks. But in reality,
all software projects involve some level of risk, and risk management is a critical part of software
development.
Understanding and addressing these software myths is important for successful software development projects. It
helps in setting realistic expectations, improving communication, and making more informed decisions throughout
the development process.
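The "no testing needed" myth above can be countered with even a minimal automated check. The sketch below is illustrative only: the `apply_discount` function and its behaviour are invented for this example, but they show how a few assertions catch edge cases that a visual code review might miss.

```python
# A hypothetical discount function: it "looks good", but only tests
# confirm that the boundary cases actually behave as intended.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# A few simple checks exercise both normal and boundary behaviour.
assert apply_discount(200.0, 25) == 150.0   # normal case
assert apply_discount(80.0, 0) == 80.0      # boundary: no discount
assert apply_discount(80.0, 100) == 0.0     # boundary: full discount
```

Even this tiny test suite documents intended behaviour and guards against regressions when the code is later changed.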
Software myths in software engineering can have several significant disadvantages and negative consequences, as
they can lead to unrealistic expectations, poor decision-making, and a lack of alignment between stakeholders. Here
are some of the disadvantages of software myths in software engineering:
o Unrealistic Expectations: Software myths can create disappointment and frustration among stakeholders and developers. Sometimes a false myth may even lead to the software not being used at all, even when it is completely safe.
o Project Delays: Software myths can delay the completion of projects and increase their overall completion time.
o Poor Quality Software: Myths such as "we can fix it later" or "we don't need extensive testing" can lead to
poor software quality. Neglecting testing and quality assurance can result in buggy and unreliable software.
o Scope Creep: Myths like "fixed requirements" can lead to scope creep as stakeholders may change their
requirements or expectations throughout the project. This can result in a never-ending development cycle.
o Ineffective Communication: Believing in myths can affect good communication within development teams
and between teams and clients. Clear and open communication is crucial for project success, and myths can
lead to misunderstandings and misalignment.
o Wasted Resources: The Idea of getting "the perfect software" can result in the allocation of unnecessary
resources, both in terms of time and money, which could be better spent elsewhere.
o Customer Dissatisfaction: Unrealistic promises made based on myths can lead to customer dissatisfaction.
When software doesn't meet exaggerated expectations, clients may be disappointed and dissatisfied.
o Reduced Productivity: Myths can lead to reduced productivity, as team members may spend time on
unnecessary tasks or follow counterproductive processes based on these myths.
o Increased Risk of Project Failure: The reliance on myths can significantly increase the risk of project failure.
Failure to address these myths can lead to project cancellations, loss of investments, and negative impacts on
an organization's reputation.
o Decreased Competitiveness: Belief in myths can make an organization less competitive in the market. It can
hinder an organization's ability to innovate and adapt.
The term software refers to the set of computer programs, procedures, and associated documents (flowcharts, manuals, etc.) that describe the program and how it is to be used.
A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes. These activities are:
1. Software specifications: The functionality of the software and constraints on its operation must be defined.
2. Software development: The software that meets the specification must be produced.
3. Software validation: The software must be validated to ensure that it does what the customer wants.
4. Software evolution: The software must evolve to meet changing client needs.
A software process model is a specified definition of a software process, which is presented from a particular
perspective. Models, by their nature, are a simplification, so a software process model is an abstraction of the actual
process, which is being described. Process models may contain activities, which are part of the software process,
software product, and the roles of people involved in software engineering. Some examples of the types of software
process models that may be produced are:
1. A workflow model: This shows the series of activities in the process along with their inputs, outputs, and dependencies. The activities in this model represent human actions.
2. A dataflow or activity model: This represents the process as a set of activities, each of which carries out some data transformation. It shows how the input to the process, such as a specification, is converted to an output, such as a design. The activities here may be at a lower level than the activities in a workflow model. They may represent transformations carried out by people or by computers.
3. A role/action model: This represents the roles of the people involved in the software process and the activities for which they are responsible.
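The dataflow (activity) model described above can be sketched as a chain of transformations, where each activity turns one artifact into the next. The artifact names and transformations below are invented purely for illustration:

```python
# Illustrative sketch of a dataflow model: each activity transforms
# its input artifact into an output artifact. Names are hypothetical.

def specify(need):
    # Transform a raw user need into a specification.
    return {"spec": f"The system shall {need}"}

def design(spec):
    # Transform a specification into a high-level design.
    return {"design": "one module per requirement", "from": spec["spec"]}

def implement(design_doc):
    # Transform a design into (a stand-in for) code.
    return {"code": f"# implements: {design_doc['from']}"}

# The process as a whole is the composition of its activities.
artifact = implement(design(specify("store customer orders")))
print(artifact["code"])  # → # implements: The system shall store customer orders
```

Composing the activities like this makes the inputs, outputs, and dependencies of the process explicit, which is exactly what a dataflow model is meant to show.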
1. The waterfall approach: This takes the above activities and represents them as separate process phases such as requirements specification, software design, implementation, testing, and so on. After each stage is defined, it is "signed off" and development goes on to the following stage.
2. Evolutionary development: This method interleaves the activities of specification, development, and validation. An initial system is rapidly developed from a very abstract specification.
3. Formal transformation: This method is based on producing a formal mathematical system specification and transforming this specification, using mathematical methods, into a program. These transformations are 'correctness preserving,' which means you can be sure that the developed program meets its specification.
4. System assembly from reusable components: This method assumes that parts of the system already exist. The system development process focuses on integrating these parts rather than developing them from scratch.
Software Crisis
1. Size: Software is becoming more expensive and more complex with the growing expectations placed on it. For example, the amount of code in consumer products is doubling every couple of years.
2. Quality: Many software products have poor quality, i.e., the software exhibits defects after being put into use, due to ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.
3. Cost: Software development is costly, both in terms of the time taken to develop it and the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.
4. Delayed Delivery: Serious schedule overruns are common. Very often the software takes longer than the
estimated time to develop, which in turn leads to cost shooting up. For example, one in four large-scale
development projects is never completed.
Software is more than programs. Any program is a subset of software, and it becomes software only when documentation and operating procedure manuals are prepared.
1. Program: A program is a combination of source code and object code.
2. Documentation: Documentation consists of different types of manuals. Examples of documentation manuals are:
Data Flow Diagram, Flow Charts, ER diagrams, etc.
3. Operating Procedures: Operating procedures consist of instructions for setting up and using the software system and instructions on how to react to system failures. Examples of operating procedure manuals are: installation guide, beginner's guide, reference guide, system administration guide, etc.
1. Requirements Gathering and Analysis
2. Quick Decision
3. Build a Prototype
4. User Evaluation
5. Prototype Refinement
6. Engineer Product
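The refinement cycle at the heart of the steps above can be sketched as a simple loop: build a quick prototype, gather customer feedback, and refine until the customer accepts. The feature names and the simulated feedback below are invented for illustration:

```python
# A sketch of the prototype refinement cycle. Customer feedback is
# simulated as the set of wanted features still missing.

def build_prototype(features):
    return {"features": sorted(features)}

def customer_feedback(prototype, wanted):
    # Returns the features the customer still finds missing.
    return sorted(set(wanted) - set(prototype["features"]))

wanted = ["login", "search", "export"]    # what the customer needs
features = ["login"]                      # quick initial design
iterations = 0
while True:
    prototype = build_prototype(features)
    missing = customer_feedback(prototype, wanted)
    iterations += 1
    if not missing:                       # customer accepts the prototype
        break
    features.append(missing[0])           # refine: add one feature per cycle

print(iterations)  # → 3
```

Each pass through the loop corresponds to one evaluate-and-refine cycle; the prototype converges on what the customer actually wants rather than what was first guessed.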
6. Errors can be detected much earlier, as the system is built side by side.
4. It is easy to fall back into code-and-fix development without proper requirement analysis, design, customer evaluation, and feedback.
7. It is a time-consuming process.
RAD is a linear sequential software development process model that emphasizes a concise development cycle using a component-based construction approach. If the requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a concise time period.
RAD (Rapid Application Development) is a concept that products can be developed faster and of higher quality
through:
o A rigidly paced schedule that defers design improvements to the next product version
1. Business Modelling: The information flow among business functions is defined by answering questions such as: what data drives the business process, what data is generated, who generates it, where does the information go, who processes it, and so on.
2. Data Modelling: The data collected from business modeling is refined into a set of data objects (entities) that are needed to support the business. The attributes (characteristics of each entity) are identified, and the relations between these data objects (entities) are defined.
3. Process Modelling: The information objects defined in the data modeling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction of the software, often employing fourth-generation (4GL) techniques.
5. Testing & Turnover: Many of the program components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But new components must be tested, and all interfaces must be fully exercised.
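The Data Modelling phase above (entities, attributes, relations) can be sketched concretely. The entity and attribute names below are invented for illustration, not part of any real RAD deliverable:

```python
# A sketch of Data Modelling output: two entities with attributes
# and a relation between them. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:                  # entity
    customer_id: int             # attribute
    name: str                    # attribute

@dataclass
class Order:                     # entity
    order_id: int
    amount: float
    customer: Customer           # relation: each Order belongs to a Customer

alice = Customer(1, "Alice")
order = Order(101, 250.0, alice)
print(order.customer.name)  # → Alice
```

Once entities and relations are written down like this, the Process Modelling phase can define the add/modify/delete/retrieve operations over them.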
o When there is a need to create a system that can be modularized and delivered within a short span of time (2-3 months).
o It should be used only if the budget allows the use of automatic code generating tools.
A software process model is a structured representation of the activities of the software development process. During the development of software, various steps that are important for the successful development of the project are taken, and if we structure them in the proper order in a model, it is called a software process model. The software process model includes activities such as planning, designing, implementation, defining tasks, setting up milestones, roles, and responsibilities, etc.
The evolutionary model is based on the concept of making an initial product and then evolving the software product over time with iterative and incremental approaches and proper feedback. In this type of model, the product goes through several iterations, and the final product emerges after multiple iterations. Development is carried out simultaneously with feedback gathered during development. This model has a number of advantages, such as customer involvement, taking feedback from the customer during development, and building the exact product that the user wants. Because of the multiple iterations, the chances of errors are reduced, and reliability and efficiency increase.
Evolutionary Model
1. Iterative Model
In the iterative model, we first take the initial requirements and then enhance the product over multiple iterations until the final product is ready. In every iteration, some design modifications are made and some changes to the functional requirements are added. The main idea behind this approach is to build the final product through multiple iterations so that the result is almost exactly what the user wants, with fewer errors and high performance and quality.
2. Incremental Model
In the incremental model, we first build the project with basic features and then evolve the project in every iteration; it is mainly used for large projects. The first step is to gather the requirements, then perform analysis, design, coding, and testing, and this process repeats until the final project is ready.
3. Spiral Model
The spiral model is a combination of the waterfall and iterative models. It focuses on risk handling while developing the project with an incremental and iterative approach, producing output quickly, and it is well suited to big projects. The software is created through multiple iterations using a spiral approach. After successive development, the final product emerges, and because customer interaction is present, the chances of error are reduced.
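The incremental model described above can be sketched as a loop that delivers one increment per iteration, starting from basic features. The feature names and increment boundaries below are invented for illustration:

```python
# A sketch of the incremental model: each iteration repeats analysis,
# design, code, and test for one increment of features, and every
# release delivers a larger working subset. Names are hypothetical.

increments = [
    ["create account", "login"],          # increment 1: basic features
    ["browse catalogue"],                 # increment 2
    ["checkout", "payments"],             # increment 3
]

delivered = []
for i, features in enumerate(increments, start=1):
    # Analysis, design, coding, and testing happen here for this
    # increment only; earlier increments are already in production.
    delivered.extend(features)
    print(f"release {i}: {len(delivered)} features delivered")
```

The key property the sketch shows is that the customer has a working (if partial) system after every iteration, rather than waiting for a single big-bang delivery.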
1. During the development phase, the customer gives feedback regularly, so the customer's requirements get clearly specified.
4. The first build gets delivered quickly, as the model uses an iterative and incremental approach.
5. Enhanced Flexibility: The iterative nature of the model allows for continuous changes and refinements to be
made, accommodating changing requirements effectively.
6. Risk Reduction: The model’s emphasis on risk analysis during each iteration helps in identifying and
mitigating potential issues early in the development process.
7. Adaptable to Changes: Since changes can be incorporated at the beginning of each iteration, it is well-suited
for projects with evolving or uncertain requirements.
8. Customer Collaboration: Regular customer feedback throughout the development process ensures that the
end product aligns more closely with the customer’s needs and expectations.
2. The complexity of the spiral model can be more than the other sequential models.
4. Project Management Complexity: The iterative nature of the model can make project management and tracking more complex compared to linear models.
5. Resource Intensive: The need for continuous iteration and customer feedback demands a higher level of
resources, including time, personnel, and tools.
6. Documentation Challenges: Frequent changes and iterations can lead to challenges in maintaining accurate
and up-to-date documentation.
7. Potential Scope Creep: The flexibility to accommodate changes can sometimes lead to an uncontrolled
expansion of project scope, resulting in scope creep.
8. Initial Planning Overhead: The model’s complexity requires a well-defined initial plan, and any deviations or
adjustments can be time-consuming and costly.
1. Requirements gathering
2. Design the requirements
3. Construction/iteration
4. Testing
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should explain business
opportunities and plan the time and effort needed to build the project. Based on this information, you can evaluate
technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to define requirements.
You can use the user flow diagram or the high-level UML diagram to show the work of new features and show how it
will apply to your existing system.
3. Construction/iteration: Once the team has defined the requirements, the work begins. Designers and developers start working on the project with the aim of deploying a working product. The product will undergo various stages of improvement, so it initially includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives feedback about the
product and works through the feedback.
o Scrum
o Crystal
o eXtreme Programming (XP)
1. Frequent Delivery
1. Due to the shortage of formal documentation, confusion arises, and crucial decisions taken during various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is complete and the developers are allotted to another project, maintenance of the finished project can become difficult.
It not only identifies candidate components but also qualifies each component’s interface, adapts components to
remove architectural mismatches, assembles components into a selected architectural style, and updates
components as requirements for the system change.
The process model for component-based software engineering occurs concurrently with component-based
development.
Component-based development:
Component-based development (CBD) is a CBSE activity that occurs in parallel with domain engineering. Using
analysis and architectural design methods, the software team refines an architectural style that is appropriate for
the analysis model created for the application to be built.
1. Component Qualification: This activity ensures that the system architecture defines the requirements that components must meet to become reusable components. Reusable components are generally identified through the traits of their interfaces. That is, "the services that are provided and the means by which consumers access these services" are defined as part of the component interface.
2. Component Adaptation: This activity ensures that the architecture defines the design conditions for all components and identifies their modes of connection. In some cases, existing reusable components may not be usable under the architecture's design rules and conditions. Such components must be adapted to meet the requirements of the architecture or be rejected and replaced by other, more suitable components.
3. Component Composition: This activity ensures that the Architectural style of the system integrates the
software components and forms a working system. By identifying the connection and coordination
mechanisms of the system, the architecture describes the composition of the end product.
4. Component Update: This activity ensures that reusable components are kept up to date. Updates can be
complicated when a third party is involved (the organization that developed the reusable component
may be outside the immediate control of the software engineering organization currently using the
component).
• Extreme Programming
What is Extreme Programming (XP)?
Extreme Programming (XP) is an Agile software development methodology that focuses on delivering high-quality
software through frequent and continuous feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.
Agile development approaches evolved in the 1990s as a reaction to documentation- and bureaucracy-based
processes, particularly the waterfall approach. Agile approaches are based on some common principles, some of
which are:
1. For progress in a project, software should be developed and delivered rapidly in small increments.
2. Continuous feedback and involvement of customers are necessary for developing good-quality software.
3. A simple design that evolves and improves with time is a better approach than doing an elaborate design up
front to handle all possible scenarios.
Extreme programming is one of the most popular and well-known approaches in the family of agile methods. An XP
project starts with user stories, which are short descriptions of the scenarios the customers and users would like the
system to support. Each story is written on a separate card, so the stories can be flexibly grouped.
Some of the good practices that have been recognized in the extreme programming model and suggested to
maximize their use are given below:
Unit - 2 Managing Software Project Software Metrics (Process, Product and Project Metrics), Software
Project Estimations, Software Project Planning (MS Project Tool), Project Scheduling & Tracking, Risk
Analysis & Management (Risk Identification, Risk Projection, Risk Refinement , Risk Mitigation).
Understanding the Requirement, Requirement Modelling, Requirement Specification (SRS), Requirement
Analysis and Requirement Elicitation, Requirement Engineering. Design Concepts and Design Principal,
Architectural Design,Component Level Design, User Interface Design, Web Application Design
What is Project?
A project is a group of tasks that need to be completed to reach a clear result. A project is also defined as a set of inputs and
outputs which are required to achieve a goal. Projects can vary from simple to difficult and can be operated by one
person or a hundred.
Projects are usually defined and approved by a project manager or team executive. They set out the expectations
and objectives, and it's up to the team to handle logistics and complete the project on time. For good project
development, some teams split the project into specific tasks so they can manage responsibility and utilize team
strengths.
It is a procedure of managing, allocating and timing resources to develop computer software that fulfills
requirements.
In software Project Management, the client and the developers need to know the length, period and cost of the
project.
There are three needs for software project management. These are:
1. Time
2. Cost
3. Quality
It is an essential part of the software organization to deliver a quality product, keeping the cost within the client's
budget and delivering the project as per schedule. There are various factors, both external and internal, which may
impact this triple constraint. Any one of the three factors can severely affect the other two.
Project Manager
A project manager is a person who has the overall responsibility for the planning, design, execution, monitoring,
controlling, and closure of a project. A project manager plays an essential role in the success of a
project.
A project manager is responsible for making decisions on both large and small projects. The project
manager manages risk and minimizes uncertainty. Every decision the project manager makes must
directly benefit the project.
1. Leader
A project manager must lead his team and should provide them direction to make them understand what is expected
from all of them.
2. Medium:
The Project manager is a medium between his clients and his team. He must coordinate and transfer all the
appropriate information from the clients to his team and report to the senior management.
3. Mentor:
He should be there to guide his team at each step and make sure that the team stays cohesive. He provides
recommendations to his team and points them in the right direction.
2. Creates the project team and assigns tasks to the team members.
A software metric is a measure of software characteristics which are measurable or countable. Software metrics are
valuable for many reasons, including measuring software performance, planning work items, measuring productivity,
and many other uses.
Within the software development process, there are many metrics that are all connected. Software metrics are similar to
the four functions of management: Planning, Organization, Control, and Improvement.
1. Product Metrics: These are the measures of various characteristics of the software product. The two important
software characteristics are:
2. Process Metrics: These are the measures of various characteristics of the software development process. For
example, the efficiency of fault detection. They are used to measure the characteristics of methods, techniques, and
tools that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed to be of greater
importance to a software developer. For example, Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring properties that are viewed to be of greater
importance to the user, e.g., portability, reliability, functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource metrics. For example,
cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the project's progress. Data
from past projects are used to collect various metrics, like time and cost; these estimates are used as a baseline for
new software. Note that as the project proceeds, the project manager will check its progress from time to time and
will compare the effort, cost, and time with the original effort, cost, and time. These metrics are also
used to decrease development costs, time, effort, and risks. The project quality can also be improved; as quality
improves, the number of errors, and with them the time and cost required, are also reduced.
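The planned-versus-actual comparison described above can be sketched as a small routine. All field names and figures below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical planned-vs-actual comparison for project tracking metrics.
# The metric names and sample figures are illustrative, not from a real project.

def variance(planned, actual):
    """Return absolute and percentage deviation of actual from planned."""
    delta = actual - planned
    pct = (delta / planned) * 100 if planned else float("inf")
    return delta, pct

planned = {"effort_pm": 24.0, "cost_usd": 120_000.0, "duration_weeks": 16.0}
actual  = {"effort_pm": 27.5, "cost_usd": 131_000.0, "duration_weeks": 18.0}

for key in planned:
    delta, pct = variance(planned[key], actual[key])
    print(f"{key}: planned={planned[key]}, actual={actual[key]}, "
          f"deviation={delta:+.1f} ({pct:+.1f}%)")
```

A positive deviation flags where the project is running over its baseline, which is exactly the comparison the project manager repeats at each checkpoint.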
Software metrics are used:
• For the analysis, comparison, and critical study of different programming languages with respect to their characteristics.
• In comparing and evaluating the capabilities and productivity of the people involved in software development.
• In making inferences about the effort to be put into the design and development of software systems.
• In comparing and making design tradeoffs between software development and maintenance costs.
• In providing feedback to software managers about progress and quality during the various phases of the software
development life cycle.
The application of software metrics is not always easy, and in some cases it is difficult and costly:
• The verification and justification of software metrics are based on historical/empirical data whose validity is difficult
to verify.
• Metrics are useful for managing software products but not for evaluating the performance of the technical staff.
• The definition and derivation of software metrics are usually based on assumptions which are not standardized and may
depend upon the tools available and the working environment.
• Most predictive models rely on estimates of certain variables which are often not known precisely.
Project size estimation is determining the scope and resources required for the project.
1. It involves assessing the various aspects of the project to estimate the effort, time, cost, and resources
needed to complete the project.
2. Accurate project size estimation is important for effective and efficient project planning, management, and
execution.
Here are some of the reasons why project size estimation is critical in project management:
1. Financial Planning: Project size estimation helps in planning the financial aspects of the project, thus helping
to avoid financial shortfalls.
2. Resource Planning: It ensures the necessary resources are identified and allocated accordingly.
3. Timeline Creation: It facilitates the development of realistic timelines and milestones for the project.
4. Identifying Risks: It helps to identify potential risks associated with overall project execution.
5. Detailed Planning: It helps to create a detailed plan for the project execution, ensuring all the aspects of the
project are considered.
6. Planning Quality Assurance: It helps in planning quality assurance activities and ensuring that the project
outcomes meet the required standards.
Here are the key roles involved in estimating the project size:
1. Project Manager: Project manager is responsible for overseeing the estimation process.
2. Subject Matter Experts (SMEs): SMEs provide detailed knowledge related to the specific areas of the project.
3. Business Analysts: Business Analysts help in understanding and documenting the project requirements.
4. Technical Leads: They estimate the technical aspects of the project such as system design, development,
integration, and testing.
5. Developers: They will provide detailed estimates for the tasks they will handle.
6. Financial Analysts: They provide estimates related to the financial aspects of the project including labor
costs, material costs, and other expenses.
7. Risk Managers: They assess the potential risks that could impact the projects’ size and effort.
1. Expert Judgment: In this technique, a group of experts in the relevant field estimates the project size based
on their experience and expertise. This technique is often used when there is limited information available
about the project.
2. Analogous Estimation: This technique involves estimating the project size based on the similarities between
the current project and previously completed projects. This technique is useful when historical data is
available for similar projects.
3. Bottom-up Estimation: In this technique, the project is divided into smaller modules or tasks, and each task
is estimated separately. The estimates are then aggregated to arrive at the overall project estimate.
4. Three-point Estimation: This technique involves estimating the project size using three values: optimistic,
pessimistic, and most likely. These values are then used to calculate the expected project size using a formula
such as the PERT formula.
5. Function Points: This technique involves estimating the project size based on the functionality provided by
the software. Function points consider factors such as inputs, outputs, inquiries, and files to arrive at the
project size estimate.
6. Use Case Points: This technique involves estimating the project size based on the number of use cases that
the software must support. Use case points consider factors such as the complexity of each use case, the
number of actors involved, and the number of use cases.
7. Parametric Estimation: For precise size estimation, mathematical models founded on project parameters and
historical data are used.
8. COCOMO (Constructive Cost Model): It is an algorithmic model that estimates effort, time, and cost in
software development projects by taking into account several different elements.
9. Wideband Delphi: Consensus-based estimating method for balanced size estimations that combines expert
estimates from anonymous experts with cooperative conversations.
10. Monte Carlo Simulation: This technique, which works especially well for complicated and unpredictable
projects, estimates project size and analyses hazards using statistical methods and random sampling.
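Several of the techniques listed above are simple enough to sketch directly. The fragment below illustrates three-point (PERT) estimation, which combines the optimistic (O), most likely (M), and pessimistic (P) values as E = (O + 4M + P) / 6, with standard deviation (P − O) / 6; the input figures are hypothetical:

```python
# Sketch of three-point (PERT) estimation.
# Inputs: optimistic (O), most likely (M), and pessimistic (P) estimates.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected value E = (O + 4M + P) / 6; std. deviation = (P - O) / 6."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: effort in person-months for a hypothetical module.
e, sd = pert_estimate(4, 6, 14)
print(f"expected effort = {e:.1f} person-months (±{sd:.1f})")
# (4 + 4*6 + 14) / 6 = 7.0; (14 - 4) / 6 ≈ 1.7
```

Weighting the most likely value four times more heavily than the extremes is what distinguishes the PERT formula from a plain average of the three points.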
Each of these techniques has its strengths and weaknesses, and the choice of technique depends on various factors
such as the project’s complexity, available data, and the expertise of the team.
Estimation of the size of the software is an essential part of Software Project Management. It helps the project
manager to further predict the effort and time that will be needed to build the project. Here are some of the
measures that are used in project size estimation:
As the name suggests, LOC counts the total number of lines of source code in a project, typically excluding blank lines and comments. Common units of LOC are LOC itself for small programs and KLOC (thousands of lines of code) for larger systems.
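A minimal sketch of how a LOC counter might work, assuming physical lines are counted and blank lines and full-line comments (here, Python-style `#` comments) are excluded:

```python
# Minimal LOC counter sketch: counts physical source lines, skipping
# blank lines and full-line comments (comment syntax assumed to be '#').

def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# A sample module
def add(a, b):
    return a + b
"""
print(count_loc(sample))  # counts the two code lines, not the comment
```

Real LOC tools must also handle inline and block comments per language, which is why different counters can report different figures for the same codebase.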
Gantt chart
The Gantt chart was first developed by Henry Gantt in 1917. Gantt charts are usually utilized in project management, and they are one
of the most popular and helpful ways of showing activities displayed against time. Each activity is represented by a bar.
A Gantt chart is a useful tool when you want to see the entire landscape of either one or multiple projects. It helps you
to view which tasks are dependent on one another and which events are coming up.
PERT chart
PERT is an acronym for Programme Evaluation Review Technique. It was developed in the 1950s by the U.S. Navy to
handle the Polaris submarine missile programme.
In project management, a PERT chart is represented as a network diagram with a number of nodes, which
represent events.
The direction of the lines indicates the sequence of the tasks. In the example, the tasks between "Task 1" and "Task 9"
must be completed in sequence; these are known as dependent or serial tasks. The tasks between Task 4 and Task 5, and between Task 4 and Task 6,
are not dependent on each other and can be undertaken simultaneously; these are known as parallel or concurrent tasks. A task that
requires no resources or completion time but must still be completed in sequence represents an event dependency;
such tasks are known as dummy activities and are represented by dotted lines.
Logic Network
The Logic Network shows the order of activities over time. It shows the sequence in which activities are to be done.
Distinguishing events and pinning down the project are its two primary uses. Moreover, it helps with
understanding task dependencies, the timescale, and the overall project workflow.
It is an important project deliverable that divides the team's work into manageable segments. The "Project Management
Body of Knowledge (PMBOK)" describes the work breakdown structure as a
"deliverable-oriented hierarchical breakdown of the work to be performed by the project team."
There are two ways to generate a Work Breakdown Structure: the top-down and the bottom-up approach.
In the top-down approach, the WBS is derived by decomposing the overall project into subprojects or lower-level tasks.
The bottom-up approach is more akin to a brainstorming exercise, where team members are asked to make a list of the
low-level tasks required to complete the project.
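The top-down decomposition described above can be sketched as a nested structure; summing the leaf estimates then gives a bottom-up total. The deliverables and effort figures below are hypothetical:

```python
# Sketch of a work breakdown structure (WBS) as a nested dictionary.
# Leaf values are hypothetical effort estimates (person-days); inner
# nodes are deliverables. Rolling up the leaves gives the bottom-up total.

wbs = {
    "Online Store": {
        "Requirements": {"Interviews": 3, "SRS document": 5},
        "Design": {"Architecture": 4, "UI mockups": 6},
        "Implementation": {"Catalog module": 10, "Checkout module": 12},
    }
}

def rollup(node):
    """Sum the leaf estimates under a WBS node (bottom-up aggregation)."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # total estimated effort: 40 person-days
```

The two WBS approaches meet in this structure: top-down defines the tree, bottom-up fills in and aggregates the leaves.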
Resource Histogram
The resource histogram is a bar chart that is used for displaying the amount of time a resource is
scheduled to work over a prearranged, specific period. Resource histograms can also include the related
feature of resource availability, used for comparison purposes.
Critical path analysis is a technique that is used to identify the activities required to complete a task, as
well as to estimate the time needed to finish each activity and the relationships between the activities. It is
also called the critical path method. CPA helps in predicting whether a project will finish on time.
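A minimal sketch of critical path analysis over a small activity-on-node network. The task names, durations, and dependencies are hypothetical, and the code assumes the network is acyclic:

```python
# Sketch of critical path analysis (CPA) on a tiny activity-on-node network.
# Task names, durations (days), and dependencies are hypothetical.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def critical_path(durations, predecessors):
    """Return (project length, tasks on the critical path). Assumes a DAG."""
    finish = {}      # earliest finish time per task
    best_pred = {}   # the predecessor that determines each task's start
    remaining = set(durations)
    while remaining:
        # Pick any task whose predecessors have all been scheduled.
        for task in sorted(remaining):
            preds = predecessors[task]
            if all(p in finish for p in preds):
                start = max((finish[p] for p in preds), default=0)
                finish[task] = start + durations[task]
                best_pred[task] = max(preds, key=lambda p: finish[p]) if preds else None
                remaining.discard(task)
                break
    # Walk back from the task that finishes last.
    path = [max(finish, key=finish.get)]
    while best_pred[path[-1]] is not None:
        path.append(best_pred[path[-1]])
    return finish[path[0]], list(reversed(path))

length, path = critical_path(durations, predecessors)
print(length, "->".join(path))  # 9 A->C->D
```

Here A-C-D (3 + 4 + 2 = 9 days) is the critical path: any slip on those tasks delays the whole project, while B has slack.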
Project Planning is an important activity performed by Project Managers. Project Managers can use the tools and
techniques to develop, monitor, and control project timelines and schedules. The tracking tools can automatically
produce a pictorial representation of the project plan. These tools also instantly update time plans as soon as new
information is entered and produce automatic reports to control the project. Scheduling tools also look into Task
breakdown and Risk management also with greater accuracy and ease of monitoring the reports. It also provides a
good GUI to effectively communicate with the stakeholders of the project.
• Time management: The project scheduling tools keep projects running the way they are planned. There will be
proper time management and better scheduling of the tasks.
• Resource allocation: It provides the resources required for project development. There will be proper
resource allocation and it helps to make sure that proper permissions are given to different individuals
involved in the project. It helps to monitor and control all resources in the project.
• Team collaboration: The project scheduling tool improves team collaboration and communication. It helps to
make it easy to comment and chat within the platform without relying on external software.
• User-friendly interface: Good project scheduling tools are designed to be more user-friendly to enable teams
to complete projects in a better and more efficient way.
• Defines work tasks: The project scheduling tool defines the work tasks of a project.
• Time and resource management: It helps to keep the project on track with respect to the time and plan.
• Improved productivity: It enables greater productivity in teams, as it helps in smarter planning, better
scheduling, and better task delegation.
• Increased efficiency: The project scheduling tool increases speed and efficiency in project development.
• Capability to handle multiple projects: The scheduling tool must handle multiple projects at a time.
• Budget friendly: The tool should be of low cost and should be within the development budget.
• Security features: The tool must be secured and risk-free from vulnerable threats.
1. Microsoft Project
3. Monday.com
4. ProjectManager.com
5. SmartTask
6. ProofHub
7. Asana
8. Wrike
9. GanttPRO
1. Microsoft Project
Microsoft offers a project management tool named Microsoft Project for project planning activities. Microsoft Project
is simple to use for scheduling projects. It generates a variety of reports and templates as per industry standards.
It can produce data in diagrams or charts in pictorial form. Themes and templates can be customized as per the user.
It supports cloud services and can share data remotely with other users.
Features:
DART (Daily Activity Reporting Tool) enables you to track the changes to records made by users. Many organizations
use DART to monitor the progress of a software project: it collects project data, keeps
track of activities in the process, and acts as an indicator that measures progress against the project plan. DART is
used to update software project stakeholders on the performance and status of the project.
Features:
• Risk Analysis & Management (Risk Identification, Risk Projection, Risk Refinement ,
Risk Mitigation)
What is Risk?
"Tomorrow's problems are today's risks." Hence, a clear definition of a risk is: a problem that could cause some loss or
threaten the progress of the project, but which has not happened yet.
These potential issues might harm the cost, schedule, or technical success of the project, the quality of our software
product, or project team morale.
Risk management is the process of identifying, addressing, and eliminating these problems before they can damage
the project.
We need to differentiate risks, as potential issues, from the current problems of the project.
Using different technologies, software developers add new features during software development. Software system
vulnerabilities grow in combination with the technology. Software products are therefore more vulnerable to
malfunctioning or performing poorly.
Many factors, including timetable delays, inaccurate cost projections, a lack of resources, and security hazards,
contribute to the risks associated with software in Software Development.
Therefore, it’s critical to identify, prioritize, and reduce risks, or take proactive preventative action, during the software
development process, as opposed to merely monitoring risk possibilities.
Unknown Unknowns
These risks are unknown to the organization and are generally technology-related; because of this, they are not
anticipated. Organizations might face unexpected challenges, delays, or failures due to these unexpected risks. Lack
of experience with a particular tool or technology can lead to difficulties in implementation.
Example
Suppose an organization is using a cloud service from a third-party vendor, and due to some issue the vendor is
unable to provide its service. In this situation, the organization has to face an unexpected delay.
Known Knowns
These are risks that are well-understood and documented by the team. Since these risks are identified early, teams
can plan for mitigation strategies. The impact of known knowns is usually more manageable compared to unknown
risks.
Example
The shortage of developers is a known risk that can cause delays in software development.
Known Unknowns
In this case, the organization is aware of potential risks, but whether they will occur is uncertain.
The organization should be ready to deal with these risks if they happen. Ways to deal with them might include making
communication better, making sure everyone understands what’s needed, or creating guidelines for how to manage
possible misunderstandings.
Example
The team may be aware of the risk of miscommunication with the client, but whether it will actually happen is
unknown.
The table below shows the types of risk, their impact, and examples of each:

Technical risks
Description: Risks arising from technical challenges or limitations in the software development process.
Impact: Technical risks can lead to delays, cost overruns, and even software failure if not properly managed.
Examples:
• Incomplete or inaccurate requirements
• Unforeseen technical complexities
• Integration issues with third-party systems
• Inadequate testing and quality assurance

Security risks
Description: Risks related to vulnerabilities in the software that could allow unauthorized access or data breaches.
Impact: Security risks can lead to financial losses, reputational damage, and legal liabilities.
Examples:
• Insecure coding practices
• Lack of proper access controls
• Vulnerabilities in third-party libraries
• Insufficient data security measures

Scalability risks
Description: Risks associated with the software’s ability to handle increasing workloads or user demands.
Impact: Scalability risks can lead to performance bottlenecks, outages, and lost revenue.
Examples:
• Inadequate infrastructure capacity
• Inefficient algorithms or data structures
• Lack of scalability testing
• Poorly designed architecture

Performance risks
Description: Risks related to the software’s ability to meet performance expectations in terms of speed, responsiveness, and resource utilization.
Impact: Performance risks can lead to user dissatisfaction, lost productivity, and competitive disadvantage.
Examples:
• Inefficient algorithms or data structures
• Excessive memory or CPU usage
• Poor database performance
• Network latency issues

Budgetary risks
Description: Risks associated with exceeding the project’s budget or financial constraints.
Impact: Budgetary risks can lead to financial strain, project delays, and even cancellation.
Examples:
• Unrealistic cost estimates
• Scope creep or changes in requirements
• Unforeseen expenses, such as third-party licenses or hardware upgrades
• Inefficient resource utilization

Contractual & legal risks
Description: Risks arising from legal or contractual obligations that are not properly understood or managed.
Impact: Contractual and legal risks can lead to disputes, delays, and even legal action.
Examples:
• Unclear or ambiguous contract terms
• Failure to comply with intellectual property laws
• Data privacy violations
• Lack of proper documentation and record-keeping

Operational risks
Description: Risks associated with the ongoing operation and maintenance of the software system.
Impact: Operational risks can lead to downtime, outages, and data loss.
Examples:
• Inadequate monitoring and alerting systems
• Lack of proper disaster recovery plans
• Insufficient training for operational staff
• Poor change management practices

Schedule risks
Description: Risks related to delays in the software development process or missed deadlines.
Impact: Schedule risks can lead to increased costs, pressure on resources, and missed market opportunities.
Examples:
• Unrealistic timelines or milestones
• Underestimation of task complexity
• Resource dependencies or conflicts
• Unforeseen events or delays
In order to conduct risk analysis in software development, you first have to evaluate the source code in detail to
understand its components. This evaluation is done to identify the components of the code and map their interactions. With
the help of the map, transactions can be detected and assessed. The map is checked against structural and architectural
guidelines in order to recognize and understand the primary software defects. The following are the steps to perform
software risk analysis.
Risk Assessment
The purpose of the risk assessment is to identify and prioritize the risks at the earliest stage and avoid losing time
and money.
Under risk assessment, you will go through:
• Risk identification: It is crucial to detect the type of risk as early as possible and address them. The risk types
are classified into
o Estimation risks: related to estimates of the resources required to build the software
o Technology risks: are related to the usage of hardware or software technologies required to build the
software
o Organizational risks: are related to the organizational environment where the software is being
created.
• Risk analysis: Experienced developers analyze the identified risks based on experience gained from
previous software projects. In the next phase, the software development team estimates the probability of each risk
occurring and its seriousness.
• Risk prioritization: The risk priority can be identified using the formula below:
p = r * s
where p is the risk priority, r is the probability of the risk becoming real, and s is the severity of the loss caused if the risk does occur.
After identifying the risks, the ones with the probability of becoming true and higher loss must be prioritized and
controlled.
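The prioritization formula p = r * s can be turned into a short ranking routine. The sketch below is illustrative only: the risk names and the probability/severity scores are hypothetical, and a real project would derive them from historical data and expert judgment.

```python
# Risk prioritization sketch: priority p = r * s, where r is the estimated
# probability of the risk occurring and s is the severity of the loss.
# The risks and scores below are hypothetical, for illustration only.

risks = [
    {"name": "key developer leaves", "r": 0.3, "s": 8},
    {"name": "third-party API changes", "r": 0.6, "s": 5},
    {"name": "hardware delivery delay", "r": 0.2, "s": 4},
]

for risk in risks:
    risk["p"] = risk["r"] * risk["s"]

# Handle the highest-priority risks first.
for risk in sorted(risks, key=lambda x: x["p"], reverse=True):
    print(f"{risk['name']}: p = {risk['p']:.1f}")
```

Sorting by p surfaces the risks that combine high probability with high loss, which is exactly the set the text says must be prioritized and controlled.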
Risk control
Risk control is performed to manage the risks and obtain desired results. Once identified, the risks can be classified
into the most and least harmful.
In software engineering, understanding the concepts of requirement, requirement modeling, and requirement
specification (SRS) is fundamental to developing successful software systems. Let’s explore these concepts:
1. Requirement
A requirement is a description of a feature or functionality that a system must provide or a condition it must satisfy
to fulfill the stakeholders' needs. Requirements are the foundation of any software development process. They are
classified into two types:
• Functional Requirements: These define the specific behavior or functions the system must perform. For
example, "The system must allow users to log in with a username and password."
• Non-Functional Requirements: These define the system’s operational characteristics or constraints, like
performance, security, reliability, and scalability. For example, "The system should handle 1,000 users
concurrently without performance degradation."
• User requirements: High-level requirements that describe what the end-users expect.
• System requirements: Detailed requirements derived from user requirements, which outline how the system
should function internally.
2. Requirement Modeling
Requirement modeling is the process of representing the system's requirements using diagrams, flowcharts, models,
or structured text. The goal is to understand, analyze, and communicate the requirements effectively to both
technical and non-technical stakeholders. It acts as a bridge between the conceptualization of the system and its
actual design and implementation.
• Data Flow Diagrams (DFD): Depict how data moves through the system and how inputs are transformed into
outputs.
• Entity-Relationship Diagrams (ERD): Used to model data and its relationships within the system.
• Class Diagrams: Depict objects, their attributes, and the relationships between them in object-oriented
systems.
The primary purpose of requirement modeling is to ensure that requirements are clear, unambiguous, and complete
before moving into the design phase. It also helps in discovering inconsistencies and gaps in the requirements.
The Software Requirement Specification (SRS) is a formal document that outlines all the functional and non-
functional requirements of the system. It serves as a reference for developers, testers, project managers, and
stakeholders throughout the software development lifecycle. The SRS is typically created during the early stages of
the software development process and forms the foundation for designing, developing, and validating the system.
1. Introduction:
2. Overall Description:
o Product perspective (how the system fits into existing workflows or systems).
3. Functional Requirements:
o Each requirement is numbered and described in detail (e.g., “The system shall allow users to reset
their passwords”).
4. Non-Functional Requirements:
o These define how well the system performs tasks, such as response time or user interface design.
6. System Models:
o Diagrams like use cases, data flow, and class diagrams that help describe the requirements in a visual
format.
Importance of SRS:
• Clarity: It removes ambiguity by clearly defining what is required from the system.
• Agreement: Ensures that all stakeholders have the same understanding of the system’s functionality.
• Baseline: It serves as a reference point for future development phases, such as design, coding, and testing.
• Testing: The SRS provides a basis for developing test cases to validate the system against the requirements.
1. Requirement Gathering: Understanding the user needs through interviews, surveys, and stakeholder
meetings.
2. Requirement Analysis: Refining and analyzing requirements to ensure they are feasible, consistent, and
aligned with business goals.
4. Requirement Specification: Writing the SRS to formally document all the requirements for the project.
Conclusion
Understanding the requirement, modeling it properly, and documenting it through a formal SRS are critical steps in
the software development process. These ensure that the project proceeds with a clear understanding of what needs
to be built, reducing risks related to scope creep, misunderstandings, and rework later in the development process.
1. Requirement Elicitation
Requirement elicitation is the process of gathering information from stakeholders to understand what they expect
from a software system. It involves identifying, collecting, and articulating the system requirements through various
techniques. The goal is to ensure that the development team clearly understands the needs of the stakeholders.
• Interviews: Conducting one-on-one or group interviews with stakeholders to understand their needs.
• Surveys/Questionnaires: Using structured forms to gather information from a large number of users or
stakeholders.
• Workshops: Collaborative meetings where stakeholders and developers discuss and brainstorm
requirements.
• Brainstorming: A group session where ideas for the system's requirements are freely suggested.
• Prototyping: Creating an early version of the system for users to interact with and provide feedback.
• Observation: Watching how users perform their tasks in their current systems or environments to gather
insights.
• Document Analysis: Reviewing existing documentation related to the business process, legacy systems, or
business rules.
2. Requirement Analysis
Requirement analysis is the process of refining, clarifying, and organizing the gathered requirements into a structured
format. It involves breaking down high-level requirements into more detailed and clear specifications, ensuring that
they are complete, consistent, and feasible within the project's scope and constraints. The goal of requirement
analysis is to ensure that the software being developed will meet the business goals and the needs of the end-users.
• Classification and Prioritization: Grouping requirements into categories (e.g., functional vs. non-functional)
and prioritizing them based on importance and project constraints.
• Feasibility Study: Determining whether the proposed requirements are technically, financially, and legally
viable.
• Consistency Check: Ensuring that there are no conflicting or redundant requirements and that all the
requirements align with the business goals.
• Documentation: Preparing structured documentation such as use cases, process models, and scenarios to
communicate requirements to both technical and non-technical stakeholders.
Conclusion
Requirement Elicitation helps in understanding what stakeholders expect from the system, while Requirement
Analysis ensures that those expectations are viable and clear. Together, they form the foundation for developing
high-quality software that meets users' needs effectively.
• Requirement Engineering
Requirements engineering (RE) refers to the process of defining, documenting, and maintaining requirements in the
engineering design process. Requirement engineering provides the appropriate mechanism to understand what the
customer desires, analyzing the need, and assessing feasibility, negotiating a reasonable solution, specifying the
solution clearly, validating the specifications and managing the requirements as they are transformed into a working
system. Thus, requirement engineering is the disciplined application of proven principles, methods, tools, and
notation to describe a proposed system's intended behavior and its associated constraints.
1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for developing software that is acceptable to
users, flexible to change, and conformant to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are needed to
accomplish customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the required software will solve
business problems and meet customer requirements in its operating environment.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can generate financial
profits for an organization.
Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we divide and
conquer: the problem is broken into smaller pieces so that each piece can be handled separately.
For software design, the goal is to divide the problem into manageable pieces.
These pieces cannot be entirely independent of each other as they together form the system. They have to cooperate
and communicate to solve the problem. This communication adds complexity.
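The divide-and-conquer idea above can be sketched with merge sort, a classic example in which a problem is split into smaller pieces that are solved separately and then combined; the merge step is the "communication" between the pieces.

```python
# Illustrative sketch of problem partitioning via divide and conquer.
# The pieces are not fully independent: the merge step combines them.

def merge_sort(items: list) -> list:
    if len(items) <= 1:                 # a small problem: handle directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide into manageable pieces
    right = merge_sort(items[mid:])
    merged = []                         # conquer: combine the sub-solutions
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 9, 1, 5, 6]) == [1, 2, 5, 5, 6, 9]
```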
• Architectural Design
Architectural design is required to represent the software design at the program level. The IEEE defines
architectural design as "the process of defining a collection of hardware and software components and their
interfaces to establish the framework for the development of a computer system." Software designed for
computer-based systems exhibits one of many architectural styles.
Each style describes a system category made up of the following:
o A collection of parts (such as computing modules and databases) that together will carry out a task that the
system needs.
o A connector set that facilitates the parts' cooperation, coordination, and communication.
o Constraints that specify how parts can be combined to create a system.
o Semantic models that aid the designer's comprehension of the system's general characteristics.
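These elements can be illustrated with a minimal sketch: two parts (a producer module and a consumer module) cooperating through a connector (a queue). All names here are illustrative, not a prescribed design.

```python
# Minimal sketch of an architectural style's elements: parts + a connector.
from queue import Queue

def producer(connector: Queue) -> None:
    """A part that performs its task and hands results to the connector."""
    for value in [1, 2, 3]:
        connector.put(value * 10)

def consumer(connector: Queue) -> list:
    """A part that cooperates with the producer via the same connector."""
    results = []
    while not connector.empty():
        results.append(connector.get())
    return results

q = Queue()        # the connector coordinating the two parts
producer(q)
assert consumer(q) == [10, 20, 30]
```

The constraint here is that parts may only communicate through the queue, which is exactly the kind of combination rule an architectural style imposes.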
Software requirements should be converted into an architecture that specifies the components and top-level
organization of the program. This is achieved through architectural design, also known as system design,
which serves as a "blueprint" for software development. Architectural design is "the process of defining a
collection of hardware and software components and their interfaces to establish the framework for
developing a computer system," according to the IEEE definition. The software requirements document is
examined to create this framework, and a methodology for supplying implementation details is designed.
The system's constituent parts and their inputs, outputs, functions, and interplay are described using these
specifics.
1. It establishes an abstraction level where the designers can specify the system's functional and performance
behavior.
2. By outlining the aspects of the system that are easily modifiable without compromising its integrity, it serves
as a guide for improving the system as necessary.
3. It assesses every top-tier design.
4. It creates and records the high-level interface designs, both internal and external.
5. It creates draft copies of the documentation for users.
6. It outlines and records the software integration timetable and the initial test requirements.
The sources for architectural design include the following:
o Information about the application domain of the software development project
o Data flow representations, such as data flow diagrams
o The availability of architectural patterns and styles
Architectural design is paramount in software engineering, where fundamental requirements like
dependability, cost, and performance are addressed. As the paradigm for software engineering shifts away
from monolithic, standalone, built-from-scratch systems and toward componentized, evolvable, standards-
based, and product line-oriented systems, this task is challenging. Knowing precisely how to move from
requirements to architectural design is another significant challenge for designers. Designers address these
challenges through reuse, componentization, and platform-based and standards-based approaches.
Even though developers are in charge of the architectural design, others like user representatives, systems
engineers, hardware engineers, and operations staff are also involved. All stakeholders must be consulted
when reviewing the architectural design to reduce risks and errors.
Components of Architectural Design
High-level organizational structures and connections between system components are established during
architectural design's crucial software engineering phase. It is the framework for the entire software project
and greatly impacts the system's effectiveness, maintainability, and quality. The following are some essential
components of software engineering's architectural design:
o System Organization: The architectural design defines how the system will be organized into various
components or modules. This includes identifying the major subsystems, their responsibilities, and how they
interact.
o Abstraction and Decomposition: Architectural design involves breaking down the system into smaller,
manageable parts. This decomposition simplifies the development process and makes understanding and
maintaining the system easier.
o Design Patterns: Using design patterns, such as Singleton, Factory, or Model-View-Controller (MVC), can help
standardize and optimize the design process by providing proven solutions to common architectural
problems.
o Architectural Styles: There are various architectural styles, such as layered architecture, client-server
architecture, microservices architecture, and more. Choosing the right style depends on the specific
requirements of the software project.
o Data Management: Architectural design also addresses how data will be stored, retrieved, and managed
within the system. This includes selecting the appropriate database systems and defining data access
patterns.
o Interaction and Communication: It is essential to plan how various parts or modules will talk to and interact
with one another. This includes specifying message formats, protocols, and APIs.
o Scalability: The architectural plan should consider the system's capacity for expansion and scalability.
Without extensive reengineering, it ought to be able to handle increased workloads or user demands.
o Security: The architectural design should consider security factors like access control, data encryption, and
authentication mechanisms.
o Optimization and performance: The architecture should be created to satisfy performance specifications.
This could entail choosing the appropriate technologies, optimizing algorithms, and effectively using
resources.
o Concerns with Cross-Cutting: To ensure consistency throughout the system, cross-cutting issues like logging,
error handling, and auditing should be addressed as part of the architectural design.
o Extensibility and Flexibility: A good architectural plan should be adaptable and extensible to make future
changes and additions without seriously disrupting the existing structure.
o Communication and Documentation: The development team and other stakeholders must have access to
clear documentation of the architectural design to comprehend the system's structure and design choices.
o Validation and Testing: Plans for how to test and validate the system's components and interactions should
be included in the architectural design.
o Maintainability: Long-term maintenance of the design requires considering factors like code organization,
naming conventions, and modularity.
o Cost factors to consider: The project budget and resource limitations should be considered when designing
the architecture.
The architectural design phase is crucial in software development because it establishes the system's overall
structure and impacts decisions made throughout the development lifecycle. A software system that meets
the needs of users and stakeholders can be more efficient, scalable, and maintainable thanks to a well-
thought-out architectural design. It also gives programmers a foundation to build the system's code.
Properties of Architectural Design
Several significant traits and qualities of architectural design in software engineering are used to direct the
creation of efficient and maintainable software systems. A robust and scalable architecture must have these
characteristics. Some of the essential characteristics of architectural design in software engineering are as
follows:
o Modularity:
Architectural design encourages modularity by dividing the software system into smaller, self-contained
modules or components. Because each module has a clear purpose and interface, modularity makes the
system simpler to comprehend, develop, test, and maintain.
o Scalability:
Scalability should be supported by a well-designed architecture, enabling the system to handle increased
workloads and growth without extensive redesign. Techniques like load balancing, distributed systems, and
component replication can be used to achieve scalability.
o Maintainability:
A software system's architectural design aims to make it maintainable over time. This entails structuring the
system to support quick updates, improvements, and bug fixes. Maintainability is facilitated by clear
documentation and adherence to coding standards.
o Flexibility:
The flexibility of architectural design should allow for easy adaptation to shifting needs. It should enable the
addition or modification of features without impairing the functionality of the current features. Design
patterns and clearly defined interfaces are frequently used to accomplish this.
o Reliability:
A strong architectural plan improves the software system's dependability. It should reduce the likelihood of
data loss, crashes, and system failures. Redundancy and error-handling procedures can improve reliability.
o Performance:
A crucial aspect of architectural design is performance. It entails fine-tuning the system to meet performance
standards, including throughput, response time, and resource utilization. Design choices like data storage
methods and algorithm selection greatly influence performance.
o Security:
Architectural design must take security seriously. The architecture should include security measures such as
access controls, encryption, authentication, and authorization to safeguard the system from potential threats
and vulnerabilities.
o Separation of Concerns:
By enforcing a clear separation of concerns, architectural design ensures that various system components,
such as the user interface, business logic, and data storage, are arranged and managed independently. This
separation makes maintenance, testing, and development easier.
o Usability:
The system's usability and user experience should be considered when making architectural decisions. User
interfaces and workflows must be designed to ensure users can interact with the software effectively and
efficiently.
o Documentation:
Architectural design that works is extensively documented. Developers and other stakeholders can refer to
the documentation, which explains the design choices, components, and reasoning behind them. It improves
understanding and communication.
o Cost-Effectiveness:
The architectural plan should take the project's resources and budget into consideration. It entails choosing
technologies, resources, and development initiatives wisely and economically.
o Validation and Testing:
The architectural design should include plans for evaluating and verifying the interactions and parts of the
system. This guarantees that the system meets the requirements and operates as intended.
1. Structure and Clarity: The organization of the software system is represented in a clear and organized
manner by architectural design. It outlines the elements, their connections, and their duties. This clarity
makes it easier for developers to comprehend how various system components work together and
contribute to their functionality. Comprehending this concept is essential for effective development and
troubleshooting.
2. Modularity: In architectural design, modularity divides a system into more manageable, independent
modules or components. Because each module serves a distinct purpose, managing, testing, and maintaining
it is made simpler. Developers can work on individual modules independently, improving teamwork and
lessening the possibility of unexpected consequences from changes.
3. Scalability: A system's scalability refers to its capacity to accommodate growing workloads and expand over
time. Thanks to an architectural design that supports scalability, the system can accommodate more users,
data, and transactions without requiring a major redesign. Systems that must adjust to shifting user needs
and business requirements must have this.
4. Maintenance and Expandability: The extensibility and maintenance of software are enhanced by
architectural design. Upgrades, feature additions, and bug fixes can be completed quickly and effectively with
an organized architecture. It lowers the possibility of introducing new problems during maintenance, which
can greatly benefit software systems that last a long time.
5. Performance Optimization: Performance optimization ensures the system meets parameters like response
times and resource usage. Architectural design allows choosing effective algorithms, data storage plans, and
other performance-boosting measures to create a responsive and effective system.
6. Security: An essential component of architectural design is security. Access controls, encryption, and
authentication are a few security features that can be incorporated into the architecture to protect sensitive
data and fend off attacks and vulnerabilities. A secure system starts with a well-designed architecture.
7. Reliability: When a system is reliable, it operates as planned and experiences no unplanned malfunctions. By
structuring the system to handle errors and recover gracefully from faults, architectural design helps
minimize failures. Moreover, it makes it possible to employ fault-tolerant and redundancy techniques to raise
system reliability.
Types of Components
• UI Components
o User Interface components provide an easy and more convenient way to encapsulate logic by
combining presentational and visible elements such as buttons, forms, and widgets.
• Service Components
o Service components are the base of business logic or application services, in which they serve as the
platform for activities such as data processing, authentication, and communication with external
systems.
• Data Components
o Through data abstraction and provision of interfaces for data access, data components take care of
database interaction issues and provide data structures for querying, updating, and saving data.
• Infrastructure Components
o Infrastructure components provide fundamental services or resources, such as logging, caching, security,
and communication protocols, on which a software system depends.
• Web Application Design
In software engineering, Web Application Design refers to the process of planning, conceptualizing, and
structuring the interface, architecture, and user interactions of a web-based application. A well-designed
web application ensures that it is user-friendly, efficient, scalable, secure, and responsive across different
devices. Web application design encompasses both front-end and back-end aspects, and involves several
phases and key principles.
Key Components of Web Application Design
1. User Interface (UI) Design
o Focuses on the layout and appearance of the web application.
o Ensures that the interface is intuitive, aesthetically pleasing, and aligned with the application's
purpose.
o Uses elements like buttons, forms, navigation menus, and other controls to allow users to interact
with the application.
2. User Experience (UX) Design
o Ensures that the web application is user-centered, providing an easy, satisfying, and enjoyable
experience.
o Emphasizes the ease of navigation, accessibility, and the clarity of content.
o UX design involves creating wireframes, user journeys, and interaction flows to ensure seamless user
interaction.
3. Front-End Development
o Deals with the client side of the application, which includes everything that the user interacts with
directly.
o Technologies used include:
▪ HTML (HyperText Markup Language): For structuring the content.
▪ CSS (Cascading Style Sheets): For styling and visual design.
▪ JavaScript: For dynamic interactions and functionality, like form validation, real-time
updates, etc.
o Front-end frameworks and libraries like React.js, Angular, and Vue.js are often used to speed up
development.
4. Back-End Development
o Focuses on the server side, handling data processing, business logic, and database management.
o Technologies used include:
▪ Server-Side Languages: Such as Node.js, Python, Ruby on Rails, PHP, Java, or ASP.NET.
▪ Databases: To store, retrieve, and manage data. Common databases include MySQL,
PostgreSQL, MongoDB, and SQL Server.
o Ensures proper communication between the front-end and back-end through APIs (Application
Programming Interfaces).
5. Database Design
o Involves designing the database structure to store and manage data efficiently.
o Ensures that the database is normalized, and relationships between tables (entities) are well defined.
o Relational databases (like MySQL) and NoSQL databases (like MongoDB) are commonly used
depending on the type and scale of data.
6. Architecture Design
o The architecture of a web application defines how its components and services are organized and
interact with each other.
o Common architectural styles include:
▪ Monolithic Architecture: Where the application is built as a single, unified system.
▪ Microservices Architecture: Where the application is broken into smaller, independently
deployable services.
▪ MVC (Model-View-Controller) Architecture: Separates the application logic into three
components—Model (data), View (UI), and Controller (logic).
7. Security Design
o Ensures that the application is protected from various threats, such as data breaches, unauthorized
access, and cyberattacks.
o Key practices include:
▪ Authentication (verifying the identity of users) and authorization (ensuring that users have
permissions to access certain resources).
▪ Data Encryption (using SSL/TLS for secure data transmission).
▪ Input Validation (to prevent security vulnerabilities like SQL injection and Cross-Site Scripting
(XSS)).
8. Performance Optimization
o A critical part of web application design, focusing on making the application fast and responsive.
o Techniques include:
▪ Caching: Storing frequently accessed data temporarily to reduce load times.
▪ Content Delivery Networks (CDNs): Distributing static content across servers globally to
improve access times.
▪ Database Optimization: Ensuring queries are efficient and indexing is used properly.
9. Responsive Design
o Ensures that the web application works across a wide range of devices (desktops, tablets,
smartphones) and screen sizes.
o Achieved using CSS media queries and responsive frameworks like Bootstrap or Foundation.
10. API Design
o If the web application needs to communicate with other applications, services, or systems, API
design becomes crucial.
o Common choices are RESTful APIs and GraphQL.
o Ensures secure, scalable, and efficient interaction between client and server.
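The input-validation point in the Security Design item above can be demonstrated with parameterized queries, the standard defense against SQL injection. The sketch uses Python's built-in sqlite3 module and an in-memory database; the table and data are illustrative.

```python
# Sketch: parameterized queries keep user input as data, so a malicious
# string cannot alter the structure of the SQL statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"  # classic injection attempt

# Unsafe: string concatenation lets the input rewrite the WHERE clause.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the ? placeholder treats the whole string as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

assert unsafe == [("alice",)]   # injection matched every row
assert safe == []               # parameterized query found no such user
```

The same placeholder mechanism exists in essentially every database driver; it is the practical form of "input validation" at the data-access layer.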
Phases of Web Application Design
1. Requirements Gathering
o Understanding the needs of the stakeholders, users, and business goals.
o Requirements may include functional requirements (features) and non-functional requirements
(performance, security, etc.).
2. Conceptual Design
o Sketching the layout and structure of the web application.
o Involves creating user personas, use case scenarios, wireframes, and prototypes to visualize the flow
and interface.
o Tools like Figma, Sketch, and Adobe XD are commonly used for creating wireframes and UI/UX
design prototypes.
3. Designing the Architecture
o Designing how different parts of the system will communicate.
o This includes database schema design, API design, and defining the interaction between the front-
end, back-end, and databases.
4. Development
o Implementing both the front-end and back-end of the web application based on the architecture.
o During development, code management tools like Git and GitHub are used to manage versions and
collaborate on the project.
5. Testing
o Testing the web application for functionality, usability, performance, security, and compatibility
across different browsers and devices.
o Types of testing include:
▪ Unit Testing: Testing individual units or components.
▪ Integration Testing: Ensuring components work well together.
▪ End-to-End Testing: Testing the entire application from start to finish.
▪ Load Testing: Testing how the application performs under heavy load.
▪ Cross-browser Testing: Ensuring compatibility across different web browsers.
6. Deployment
o Deploying the web application to a web server, making it accessible over the internet.
o Common hosting services include AWS, Google Cloud Platform (GCP), Microsoft Azure, or Heroku.
7. Maintenance and Updates
o After deployment, the web application requires continuous monitoring, bug fixing, and updates to
improve functionality or security.
Web Application Design Architecture (Diagram)
A typical web application architecture might look something like this:
+-----------------------+
| Client (Browser) |
+-----------------------+
|
v
+-----------------------------+
| Front-End (UI) |
| (HTML, CSS, JavaScript) |
+-----------------------------+
|
v
+-----------------------+
| Web Server (API) |
| (Node.js, Python, etc.) |
+-----------------------+
|
v
+-----------------------------+
| Database |
| (MySQL, MongoDB, etc.) |
+-----------------------------+
Best Practices in Web Application Design
1. Keep it simple: Simplicity enhances usability and maintainability.
2. Prioritize security: Ensure the application is designed with security in mind.
3. Use modern frameworks: Leverage front-end and back-end frameworks for fast and scalable development.
4. Optimize for performance: Use caching, CDNs, and lazy loading to improve load times.
5. Ensure responsive design: The application must work well on both desktop and mobile devices.
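Best practice 4 (caching) can be sketched with Python's functools.lru_cache, which memoizes results so repeated requests are served from memory instead of being recomputed. The page-rendering function here is a stand-in for any expensive call.

```python
# Sketch of application-level caching: memoize an expensive operation.
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def render_page(slug: str) -> str:
    calls["count"] += 1            # stands in for slow rendering / DB work
    return f"<html>{slug}</html>"

render_page("home")
render_page("home")                # served from cache; no second render
assert calls["count"] == 1
assert render_page.cache_info().hits == 1
```

Real web applications usually push this idea out of process (e.g. into a shared cache or a CDN), but the trade-off is the same: faster responses in exchange for managing staleness.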
Conclusion
Web Application Design is a multidisciplinary process that requires collaboration between UI/UX designers,
front-end developers, back-end developers, and database engineers. A successful web application is one that
is responsive, secure, scalable, and meets both user and business needs effectively. Following a structured
design approach ensures that the application is reliable, user-friendly, and able to evolve as user needs and
technology change.
Unit - 3 Software Coding & Testing Coding Standard and coding Guidelines, Code
Review, Software Documentation, Testing Strategies, Testing Techniques and Test
Case, Test Suites Design, Testing Conventional Applications, Testing Object Oriented
Applications, Testing Web and Mobile Applications, Testing Tools (Win runner, Load
runner). Quality Concepts and Software Quality Assurance, Software Reviews
(Formal Technical Reviews), Software Reliability, The Quality Standards: ISO 9000,
CMM, Six Sigma for SE, SQA Plan
Software testing provides an independent and objective view of the software and gives assurance of its
fitness for use. It involves testing all components under the required conditions to confirm whether they
satisfy the specified requirements. The process also provides the client with information about the quality of
the software.
Testing is mandatory because it would be a dangerous situation if the software failed at any time due to a
lack of testing. So, without testing, software cannot be deployed to the end user.
What is Testing
Testing is a group of techniques to determine the correctness of an application under a predefined script;
however, testing cannot find all the defects of an application. The main intent of testing is to detect failures
of the application so that they can be discovered and corrected. It does not demonstrate that a product
functions properly under all conditions, only that it does not work under some specific conditions.
Testing compares the behavior and state of the software against reference mechanisms by which problems
can be recognized. These mechanisms may include past versions of the same product, comparable products,
interfaces of expected purpose, relevant standards, or other criteria, but are not limited to these.
Testing includes an examination of code and also the execution of code in various environments, conditions
as well as all the examining aspects of the code. In the current scenario of software development, a testing
team may be separate from the development team so that Information derived from testing can be used to
correct the process of software development.
The success of software depends on acceptance by its target audience, an easy graphical user interface,
strong functionality, load handling, and so on. For example, the audience for banking software is totally
different from the audience for a video game. Therefore, when an organization develops a software product,
it can assess whether the product will be beneficial to its purchasers and other audiences.
Type of Software testing
We have various types of testing available in the market, which are used to test the application or the
software.
Software testing is broadly divided into manual testing and automation testing:
Manual testing
The process of checking the functionality of an application according to customer needs, without the help of
automation tools, is known as manual testing. While performing manual testing on an application, we do not
need specific knowledge of any testing tool; rather, we need a proper understanding of the product so that
we can easily prepare the test documents.
Manual testing can be further divided into three types of testing, which are as follows:
o White box testing
o Black box testing
o Gray box testing
For more information about manual testing, refer to the link below:
https://www.javatpoint.com/manual-testing
Automation testing
Automation testing is the process of converting manual test cases into test scripts with the help of
automation tools or a programming language. With the help of automation testing, we can enhance the
speed of test execution because it does not require human effort for each run; we need only write the test
scripts and execute them.
For more information about automation testing, refer to the link below:
https://www.javatpoint.com/automation-testing
Prerequisite
Before learning software testing, you should have a basic knowledge of computer functionality, mathematics,
a computer language, and logical operators.
Audience
Our software testing tutorial is designed for beginners and professionals.
• Code Review
Code reviews are like the quality control checkpoint in software engineering. Before code gets merged into
the main project, another set of eyes looks it over. The goal is to catch bugs, ensure code quality, and share
knowledge among team members.
Why it's essential:
1. Catch Errors Early: Identifies issues before they become major problems.
2. Improve Code Quality: Encourages best practices and adherence to coding standards.
3. Knowledge Sharing: Helps team members learn from each other and understand different parts of the
codebase.
4. Collaborative Culture: Promotes teamwork and collaboration.
The process usually involves developers submitting their code changes (pull requests) and reviewers
examining the code, providing feedback, and suggesting improvements. Once the code meets the required
standards, it gets approved and merged.
• Software Documentation
Software documentation is the unsung hero of the development process. It's what makes code
understandable, maintainable, and usable long after it's been written.
There are a few key types:
1. Code Documentation: Inline comments and descriptions that explain what specific blocks of code do. Think
of it as a guide for anyone who dives into the code later.
2. Technical Documentation: Detailed explanations of how the system works, including architecture diagrams,
API references, and database schema.
3. User Documentation: Manuals, guides, and help files that explain how to use the software. This is often
aimed at end users or customers.
Good documentation ensures that everyone from developers to end-users can understand and use the
software effectively. It's like the breadcrumbs that keep everyone from getting lost in the forest of code.
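Code documentation in practice looks like the sketch below: a docstring plus inline comments that make the intent recoverable long after the code is written. The loan formula and names are illustrative.

```python
# Illustrative sketch of code documentation: docstring + inline comments.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Return the fixed monthly payment for an amortized loan.

    Args:
        principal: amount borrowed.
        annual_rate: yearly interest rate, e.g. 0.06 for 6%.
        months: number of monthly payments.
    """
    if annual_rate == 0:                  # edge case: interest-free loan
        return principal / months
    r = annual_rate / 12                  # convert to a monthly rate
    return principal * r / (1 - (1 + r) ** -months)

assert round(monthly_payment(1200, 0.0, 12), 2) == 100.0
```

The docstring serves readers of the API; the inline comments serve whoever next edits the body. Both are cheap to write now and expensive to reconstruct later.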
• Testing Strategies
Testing strategies are critical to ensuring that software performs as expected and is free from defects. They
cover a range of approaches to validate everything from individual units of code to the system as a whole.
1. Unit Testing: Testing individual components or functions in isolation. It's like zooming in on a single piece of the puzzle to make sure it fits perfectly.
2. Integration Testing: Ensuring that different components or systems work together as expected. Think of it as checking that puzzle pieces fit together seamlessly.
3. System Testing: Testing the complete and integrated software to verify it meets the requirements. It's like looking at the entire puzzle to ensure it forms the correct picture.
4. Acceptance Testing: Validating the software against user requirements and ensuring it provides the intended
value. This is like having someone who ordered the puzzle verify that it's the one they wanted.
5. Performance Testing: Assessing how the software performs under various conditions, such as load, stress, and scalability. It's like making sure the puzzle can withstand a bit of rough handling and still look good.
6. Security Testing: Identifying vulnerabilities and ensuring the software is secure against potential threats. It's like checking that the puzzle has no missing pieces or flaws that could compromise its integrity.
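As a minimal sketch of the unit-testing level, here is Python's standard `unittest` module applied to a toy `add` function (both the function and the tests are invented for illustration):

```python
import unittest

def add(a, b):
    """The unit under test: a deliberately simple example function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the tests for this one unit in isolation, without the rest of the system.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
print(result.wasSuccessful())  # True
```

Each test exercises the unit through its public interface only, which is what makes failures easy to localize.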
Testing techniques and creating effective test cases are the bread and butter of ensuring software quality.
Testing Techniques:
1. Black Box Testing: Testing without looking at the internal code. Focus is on input and output. It's like testing a
car by driving it, without peeking under the hood.
2. White Box Testing: Testing with full knowledge of the internal code. It's like diving deep into the engine of the car to check every component.
3. Grey Box Testing: A mix of both, where some knowledge of the internal workings is available. It's like knowing the car's design but focusing mainly on its performance.
4. Exploratory Testing: No predefined cases; testers explore the software to find bugs. Think of it as freestyle driving to find unexpected issues.
5. Regression Testing: Ensuring new code changes don't break existing functionality. It's like rechecking the car's performance after adding a new part.
Test Case: A test case is a set of conditions or variables used to determine whether a system meets requirements and works correctly. Good test cases are specific, repeatable, and cover both positive and negative scenarios.
Key components of a test case include:
• Test Case ID: Unique identifier.
• Description: A brief summary of what’s being tested.
• Preconditions: Any setup needed before executing the test.
• Test Steps: Step-by-step instructions to execute the test.
• Expected Result: What should happen if everything works correctly.
• Actual Result: What actually happens when the test is executed.
• Status: Pass or fail based on whether the actual result matches the expected result.
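These components map naturally onto a small record. A sketch in Python, where the field names follow the list above but the class itself is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A record mirroring the test-case components listed above (illustrative only)."""
    case_id: str
    description: str
    preconditions: list
    steps: list
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"

    def record(self, actual: str) -> None:
        # Status is Pass only when the actual result matches the expected result.
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"

tc = TestCase(
    case_id="TC-001",
    description="Login with valid credentials",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter credentials", "Submit"],
    expected_result="Dashboard shown",
)
tc.record("Dashboard shown")
print(tc.status)  # Pass
```

Keeping expected and actual results as separate fields is what makes the pass/fail decision mechanical and repeatable.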
Software testing verifies every aspect and feature of the software, which often leads to a large number of test cases. As the number of test cases grows, they can become disorganized and hard to manage. A software test suite prevents such a situation from occurring.
What is a Software Test Suite?
A test suite is a methodical arrangement of test cases developed to validate specific functionalities. Each test case in a suite verifies a particular functionality or performance goal, and together the test cases in a suite verify the quality and dependability of the software.
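In Python's standard `unittest` module, this grouping is literally a `TestSuite` object. A minimal sketch (the `is_even` function and its tests are invented for illustration):

```python
import unittest

def is_even(n):
    """The function under test."""
    return n % 2 == 0

class EvenTests(unittest.TestCase):
    def test_even(self):
        self.assertTrue(is_even(4))

class OddTests(unittest.TestCase):
    def test_odd(self):
        self.assertFalse(is_even(3))

# A suite groups related cases so they run, and report, together.
suite = unittest.TestSuite()
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(EvenTests))
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(OddTests))

result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The suite is the unit of organization: adding a new case to it automatically includes that case in every future run.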
----------------------------------------------------------------------------------------------------------------
Testing Object-Oriented Applications
Testing object-oriented applications focuses on classes and objects, and involves:
• Unit Testing: Testing individual methods and classes.
• Integration Testing: Ensuring that different parts of the system work together.
• Polymorphism and Inheritance Tests: Making sure that derived classes work correctly when inherited methods are called.
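A polymorphism test can be as simple as calling the same method across a class hierarchy and checking each override. A sketch in Python (the `Shape` hierarchy is invented for illustration):

```python
import unittest

class Shape:
    """Base class; each subclass must override area()."""
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

class TestPolymorphism(unittest.TestCase):
    def test_overridden_area(self):
        # One loop exercises the same call across the hierarchy;
        # each derived class must answer with its own implementation.
        for shape, expected in [(Square(3), 9), (Circle(1), 3.14159)]:
            self.assertAlmostEqual(shape.area(), expected)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestPolymorphism)
)
print(result.wasSuccessful())  # True
```

The loop is the point: the test depends only on the base-class interface, so it automatically covers any new subclass added to the list.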
Testing Web and Mobile Applications
Web and mobile app testing ensures that the applications perform well across different platforms and devices.
• Web Testing: Includes functionality, usability, compatibility, performance, and security testing.
• Mobile Testing: Covers various aspects like screen size, operating system, network conditions, and battery life.
Testing Tools
• WinRunner: An automated functional GUI testing tool that allows the creation and execution of tests based on user actions.
• LoadRunner: A performance testing tool for examining system behavior and performance under load. It simulates multiple users accessing the application to identify and troubleshoot issues.
Quality Concepts and Software Quality Assurance (SQA)
• Quality Concepts: Focus on preventing defects by ensuring processes are followed.
• SQA: Involves systematic activities to ensure the software meets the required quality standards. This includes
audits, process standards, and testing strategies.
Software Reviews (Formal Technical Reviews)
Formal technical reviews (FTRs) are structured processes in which team members examine the software product to identify defects. They are planned, documented, and typically follow a strict protocol.
Software Reliability
Software reliability is about ensuring that software performs correctly under specified conditions over time. It involves:
• Fault Tolerance: Ability to continue operation despite faults.
• Availability: Ensuring the system is operational when needed.
• Recovery: Ability to recover from failures quickly.
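Availability is commonly quantified as MTBF / (MTBF + MTTR), where MTBF is the mean time between failures and MTTR the mean time to repair. A quick sketch of the standard formula (the figures plugged in are made up):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is operational.

    MTBF = mean time between failures, MTTR = mean time to repair.
    """
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: a system that fails every 500 h and takes 2 h to repair.
a = availability(500, 2)
print(f"{a:.4%}")  # 99.6016%
```

The formula makes the trade-off explicit: availability improves either by failing less often (raising MTBF) or by recovering faster (lowering MTTR).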
• Quality Standards: ISO 9000, CMM, Six Sigma for SE, SQA Plan
ISO 9000 Certification
ISO (the International Organization for Standardization) is a federation of national standards bodies from many countries, established to plan and foster standardization. ISO published its 9000 series of standards in 1987. It serves as a reference for contracts between independent parties. The ISO 9000 standard determines the guidelines for maintaining a quality system. The standard mainly addresses operational and organizational methods, such as responsibilities and reporting. ISO 9000 defines a set of guidelines for the production process and is not directly concerned with the product itself.
Types of ISO 9000 Quality Standards
The ISO 9000 series of standards is based on the assumption that if a proper process is followed for production, then good-quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.
1. ISO 9001: This standard applies to organizations engaged in the design, development, production, and servicing of goods. This is the standard that applies to most software development organizations.
2. ISO 9002: This standard applies to organizations that do not design products but are involved only in production. Examples include steel and car manufacturing industries that buy product and plant designs from external sources and only manufacture those products. Therefore, ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and testing of the
products. For example, Gas companies.
How to get ISO 9000 Certification?
An organization that decides to obtain ISO 9000 certification applies to a registrar for registration. The process consists of the following stages:
1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document Review and Adequacy Audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the review.
5. Registration: The Registrar awards the ISO certification after the successful completion of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.
• CMM
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes, and as a result development efforts become chaotic. This level is therefore also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are established.
Size and cost estimation methods, like function point analysis, COCOMO, etc. are used.
Level 3: Defined
At this level, the methods for both management and development activities are defined and documented.
There is a common organization-wide understanding of activities, roles, and responsibilities. Although the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the attributes of the product being developed, such as its size, reliability, time complexity, and understandability.
Process metrics reflect the effectiveness of the process being used, such as average defect correction time and productivity.
Level 5: Optimizing
At this level, process and product metrics are collected and analyzed, and the results are used for continuous process and product quality improvement.
• Six Sigma
Six Sigma is the process of improving the quality of output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates. A Six Sigma process is one in which 99.99966% of all opportunities to produce some feature of a component are statistically expected to be free of defects (3.4 defective features per million opportunities).
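The 3.4-per-million figure can be checked with the standard defects-per-million-opportunities (DPMO) calculation. A sketch (the sample counts are made up):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def defect_free_rate(dpmo_value: float) -> float:
    """Fraction of opportunities expected to be defect-free for a given DPMO."""
    return 1 - dpmo_value / 1_000_000

# Example: 34 defects observed across 10,000 units with 1,000 opportunities each.
print(dpmo(34, 10_000, 1_000))          # 3.4 defects per million opportunities

# Six Sigma quality: 3.4 defects per million opportunities.
rate = defect_free_rate(3.4)
print(f"{rate:.5%}")  # 99.99966%
```

Plugging 3.4 DPMO into the rate formula recovers exactly the 99.99966% quoted above.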
DMAIC
It specifies a data-driven quality strategy for improving processes. This methodology is used to enhance an
existing business process.
The DMAIC project methodology has five phases:
1. Define: It covers the process mapping and flow-charting, project charter development, problem-solving
tools, and so-called 7-M tools.
2. Measure: It includes the principles of measurement, continuous and discrete data, and scales of
measurement, an overview of the principle of variations and repeatability and reproducibility (RR) studies for
continuous and discrete data.
3. Analyze: It covers establishing a process baseline, how to determine process improvement goals, knowledge
discovery, including descriptive and exploratory data analysis and data mining tools, the basic principle of
Statistical Process Control (SPC), specialized control charts, process capability analysis, correlation and
regression analysis, analysis of categorical data, and non-parametric statistical methods.
4. Improve: It covers project management, risk assessment, process simulation, and design of experiments
(DOE), robust design concepts, and process optimization.
5. Control: It covers process control planning, using SPC for operational control and PRE-Control.
DMADV
It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product or process designs that result in more predictable, mature, and defect-free performance.
• SQA Plan
Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.
Quality of Conformance: Quality of conformance is the degree to which the design specifications are followed during manufacturing. The greater the degree of conformance, the higher the level of quality of conformance.
Software Quality: Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product.
Quality Assurance: Quality Assurance is the preventive set of activities that provide greater confidence that
the project will be completed successfully.
Quality Assurance focuses on how engineering and management activities will be carried out.
Since everyone is interested in the quality of the final product, it should be assured that we are building the right product. This can be assured only through inspection and review of intermediate products; if any bugs are found, they are fixed, and quality is thereby enhanced.
Importance of Quality
We would expect the quality to be a concern of all producers of goods and services. However, the distinctive
characteristics of software and in particular its intangibility and complexity, make special demands.
Increasing criticality of software: The final customer or user is naturally concerned about the general quality of software, especially its reliability. This is increasingly the case as organizations become more dependent on their computer systems and software is used more and more in safety-critical areas, for example, to control aircraft.
The intangibility of software: This makes it challenging to know that a particular task in a project has been
completed satisfactorily. The results of these tasks can be made tangible by demanding that the developers
produce 'deliverables' that can be examined for quality.
Accumulating errors during software development: Because computer system development is made up of several steps where the output from one stage is the input to the next, errors in earlier 'deliverables' are added to those introduced at later stages, leading to accumulated detrimental effects. In general, the later in a project an error is found, the more expensive it is to fix. In addition, because the number of errors in the system is unknown, the debugging phases of a project are particularly challenging to control.
Software Quality Assurance
Software quality assurance is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
It is a set of activities designed to evaluate the process by which products are developed or manufactured.
SQA Encompasses
o A quality management approach
o Effective Software engineering technology (methods and tools)
o Formal technical reviews that are applied throughout the software process
o A multitier testing strategy
o Control of software documentation and the changes made to it.
o A procedure to ensure compliance with software development standards
o Measuring and reporting mechanisms.
Unit - 4 Software Maintenance and Configuration Management Types of Software
Maintenance,The SCM Process,Identification of Objects in the Software
Configuration, DevOps: Overview, Problem Case Definition, Benefits of Fixing
Application Development Challenges, DevOps Adoption Approach through
Assessment, Solution Dimensions, What is DevOps?, DevOps Importance and
Benefits, DevOps Principles and Practices, 7 C’s of DevOps Lifecycle for Business
Agility, DevOps and Continuous Testing, How to Choose Right DevOps Tools,
Challenges with DevOps Implementation, Must Do Things for DevOps, Mapping My
App to DevOps –
-------------------------------------------------------------------------------------------------
What is DevOps?
If you want to build better software faster, DevOps is the answer. Here’s how this software development
methodology brings everyone to the table to create secure code quickly.
DevOps defined
DevOps combines development (Dev) and operations (Ops) to increase the efficiency, speed, and security of
software development and delivery compared to traditional processes. A more nimble software development
lifecycle results in a competitive advantage for businesses and their customers.
DevOps explained
DevOps can be best explained as people working together to conceive, build and deliver secure software at
top speed. DevOps practices enable software development (dev) and operations (ops) teams to accelerate
delivery through automation, collaboration, fast feedback, and iterative improvement. Stemming from
an Agile approach to software development, a DevOps process expands on the cross-functional approach of
building and shipping applications in a faster and more iterative manner.
In adopting a DevOps development process, you are making a decision to improve the flow and value
delivery of your application by encouraging a more collaborative environment at all stages of the
development cycle. DevOps represents a change in mindset for IT culture. In building on top of Agile, lean
practices, and systems theory, DevOps focuses on incremental development and rapid delivery of software.
Success relies on the ability to create a culture of accountability, improved collaboration, empathy, and joint
responsibility for business outcomes.
DevOps is a combination of software development (dev) and operations (ops). It is defined as a software
engineering methodology which aims to integrate the work of development teams and operations teams by
facilitating a culture of collaboration and shared responsibility.
DevOps methodology
The DevOps methodology aims to shorten the systems development lifecycle and provide continuous
delivery with high software quality. It emphasizes collaboration, automation, integration and rapid feedback
cycles. These characteristics help ensure a culture of building, testing, and releasing software that is more
reliable and at a high velocity.
This methodology comprises four key principles that guide the effectiveness and efficiency of application
development and deployment. These principles, listed below, center on the best aspects of modern software
development.
Core DevOps principles
1. Automation of the software development lifecycle. This includes automating testing, builds, releases, the
provisioning of development environments, and other manual tasks that can slow down or introduce human
error into the software delivery process.
2. Collaboration and communication. A good DevOps team has automation, but a great DevOps team also has
effective collaboration and communication.
3. Continuous improvement and minimization of waste. From automating repetitive tasks to watching
performance metrics for ways to reduce release times or mean-time-to-recovery, high performing DevOps
teams are regularly looking for areas that could be improved.
4. Hyperfocus on user needs with short feedback loops. Through automation, improved communication and
collaboration, and continuous improvement, DevOps teams can take a moment and focus on what real users
really want, and how to give it to them.
By adopting these principles, organizations can improve code quality, achieve a faster time to market, and
engage in better application planning.
The four phases of DevOps
The evolution of DevOps has unfolded across four distinct phases, each marked by shifts in technology and
organizational practices. This progression reflects the growing complexity within DevOps, driven primarily by
two key trends:
1. Transition to Microservices: As organizations shift from monolithic architectures to more
flexible microservices architectures, the demand for specialized DevOps tools has surged. This shift aims to
accommodate the increased granularity and agility offered by microservices.
2. Increase in Tool Integration: The proliferation of projects and the corresponding need for more DevOps tools
have led to a significant rise in the number of integrations between projects and tools. This complexity has
prompted organizations to rethink their approach to adopting and integrating DevOps tools.
These four phases are as follows:
Phase 1: Bring Your Own DevOps (BYOD)
In the Bring Your Own DevOps phase, each team selected its own tools. This approach caused problems
when teams attempted to work together because they were not familiar with the tools of other teams. This
phase highlighted the need for a more unified toolset to facilitate smoother team integration and project
management.
Phase 2: Best-in-class DevOps
To address the challenges of using disparate tools, organizations moved to the second phase, Best-in-class
DevOps. In this phase, organizations standardized on the same set of tools, with one preferred tool for each
stage of the DevOps lifecycle. It helped teams collaborate with one another, but the problem then became
moving software changes through the tools for each stage.
Phase 3: Do-it-yourself (DIY) DevOps
To remedy this problem, organizations adopted do-it-yourself (DIY) DevOps, building on top of and between
their tools. They performed a lot of custom work to integrate their DevOps point solutions together.
However, since these tools were developed independently without integration in mind, they never fit quite
right. For many organizations, maintaining DIY DevOps was a significant effort and resulted in higher costs,
with engineers maintaining tooling integration rather than working on their core software product.
Phase 4: DevOps Platform
A single-application platform approach improves the team experience and business efficiency. A DevOps
platform replaces DIY DevOps, allowing visibility throughout and control over all stages of the DevOps
lifecycle.
By empowering all teams – Development, Operations, IT, Security, and Business – to collaboratively plan,
build, secure, and deploy software across an end-to-end unified system, a DevOps platform represents a
fundamental step-change in realizing the full potential of DevOps.
GitLab's DevOps platform is a single application powered by a cohesive user interface, agnostic of self-managed or SaaS deployment. It is built on a single codebase with a unified data store, allowing organizations to resolve the inefficiencies and vulnerabilities of an unreliable DIY toolchain.
How DevOps can benefit from AI and ML?
Artificial intelligence (AI) and machine learning (ML) are still maturing in their applications for DevOps, but
there is plenty for organizations to take advantage of today. They assist in analyzing test data, identifying
coding anomalies that could lead to bugs, as well as automating security and performance monitoring to
detect and proactively mitigate potential issues.
• AI and ML can find patterns, figure out the coding problems that cause bugs, and alert DevOps teams so they
can dig deeper.
• Similarly, DevOps teams can use AI and ML to sift through security data from logs and other tools to detect
breaches, attacks, and more. Once these issues are found, AI and ML can respond with automated mitigation
techniques and alerting.
• AI and ML can save developers and operations professionals time by learning how they work best, making
suggestions within workflows, and automatically provisioning preferred infrastructure configurations.
AI and ML excel in parsing vast amounts of test and security data, identifying patterns and coding anomalies
that could lead to potential bugs or breaches. This capability enables DevOps teams to proactively address
vulnerabilities and streamline alerting processes.
What is a DevOps platform?
DevOps brings the human silos together and a DevOps platform does the same thing for tools. Many teams
start their DevOps journey with a disparate collection of tools, all of which have to be maintained and many
of which don’t or can’t integrate. A DevOps platform brings tools together in a single application for
unparalleled collaboration, visibility, and development velocity.
A DevOps platform is how modern software should be created, secured, released, and monitored in a
repeatable fashion. A true DevOps platform means teams can iterate faster and innovate together because
everyone can contribute. This integrated approach is pivotal for organizations looking to navigate the
complexities of modern software development and realize the full potential of DevOps.
Benefits of a DevOps culture
The business value of DevOps and the benefits of a DevOps culture lie in the ability to improve the production environment in order to deliver software faster with continuous improvement. You need the
ability to anticipate and respond to industry disruptors without delay. This becomes possible within an Agile
software development process where teams are empowered to be autonomous and deliver faster, reducing
work in progress. Once this occurs, teams are able to respond to demands at the speed of the market.
There are some fundamental concepts that need to be put into action in order for DevOps to function as
designed, including the need to:
• Remove institutionalized silos and handoffs that lead to roadblocks and constraints, particularly where the measurements of success for one team are directly at odds with another team's key performance indicators (KPIs).
• Implement a unified tool chain using a single application that allows multiple teams to share and collaborate.
This will enable teams to accelerate delivery and provide fast feedback to one another.
Key benefits:
Adopting a DevOps culture brings numerous benefits to an organization, notably in operational efficiency,
faster delivery of features, and improved product quality. Key advantages include:
Enhanced Collaboration: Breaking down silos between development and operations teams fosters a more
cohesive working environment, leading to better communication and collaboration.
Increased Efficiency: Automation of the software development lifecycle reduces manual tasks, minimizes
errors, and accelerates delivery times.
Continuous Improvement: DevOps encourages a culture of continuous feedback, allowing teams to quickly
adapt and make improvements, ensuring that the software meets user needs effectively.
Higher Quality and Security: With practices like continuous integration and delivery (CI/CD) and proactive
security measures, DevOps ensures that the software is not only developed faster but also maintains high
quality and security standards.
Faster Time to Market: By streamlining development processes and improving team collaboration,
organizations can reduce the overall time from conception to deployment, offering a competitive edge in
rapidly evolving markets.
What is the goal of DevOps?
DevOps represents a change in mindset for IT culture. In building on top of Agile practices, DevOps focuses
on incremental development and rapid delivery of software. Success relies on the ability to create a culture
of accountability, improved collaboration, empathy, and joint responsibility for business outcomes.
Adopting a DevOps strategy enables businesses to increase operational efficiencies, deliver better products
faster, and reduce security and compliance risk.
The DevOps lifecycle and how DevOps works
The DevOps lifecycle stretches from the beginning of software development through to delivery, maintenance,
and security. The stages of the DevOps lifecycle are:
Plan: Organize the work that needs to be done, prioritize it, and track its completion.
Create: Write, design, develop and securely manage code and project data with your team.
Verify: Ensure that your code works correctly and adheres to your quality standards — ideally with
automated testing.
Package: Package your applications and dependencies, manage containers, and build artifacts.
Secure: Check for vulnerabilities through static and dynamic tests, fuzz testing, and dependency scanning.
Release: Deploy the software to end users.
Configure: Manage and configure the infrastructure required to support your applications.
Monitor: Track performance metrics and errors to help reduce the severity and frequency of incidents.
Govern: Manage security vulnerabilities, policies, and compliance across your organization.
DevOps tools, concepts and fundamentals
DevOps covers a wide range of practices across the application lifecycle. Teams often start with one or more
of these practices in their journey to DevOps success.
Continuous Integration (CI): The practice of regularly integrating all code changes into the main branch, automatically testing each change, and automatically kicking off a build.
Shift left: A term for shifting security and testing much earlier in the development process. Doing this can help speed up development while simultaneously improving code quality.
Security has become an integral part of the software development lifecycle, with much of the security
shifting left in the development process. DevSecOps ensures that DevOps teams understand the security and
compliance requirements from the very beginning of application creation and can properly protect the
integrity of the software.
By integrating security seamlessly into DevOps workflows, organizations gain the visibility and control
necessary to meet complex security demands, including vulnerability reporting and auditing. Security teams
can ensure that policies are being enforced throughout development and deployment, including critical
testing phases.
DevSecOps can be implemented across an array of environments such as on-premises, cloud-native, and
hybrid, ensuring maximum control over the entire software development lifecycle.
How are DevOps and CI/CD related?
CI/CD — the combination of continuous integration and continuous delivery — is an essential part of DevOps
and any modern software development practice. A purpose-built CI/CD platform can maximize development
time by improving an organization’s productivity, increasing efficiency, and streamlining workflows through
built-in automation, continuous testing, and collaboration.
As applications grow larger, the features of CI/CD can help decrease development complexity. Adopting other
DevOps practices — like shifting left on security and creating tighter feedback loops — helps break down
development silos, scale safely, and get the most out of CI/CD.
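Conceptually, a CI pipeline is an ordered sequence of automated gates that every change must pass. A toy sketch of that control flow (the stage names and checks are hypothetical, not any real CI product's configuration):

```python
def run_pipeline(change):
    """Run each stage in order; stop at the first failure, as a CI server would."""
    stages = [
        ("integrate", lambda c: c["merges_cleanly"]),
        ("test",      lambda c: c["tests_pass"]),
        ("build",     lambda c: c["builds"]),
    ]
    for name, check in stages:
        if not check(change):
            return f"failed at {name}"
    return "ready to deploy"

# A change whose tests fail never reaches the build stage.
change = {"merges_cleanly": True, "tests_pass": False, "builds": True}
print(run_pipeline(change))  # failed at test
```

The fail-fast ordering is the essence of CI: later, more expensive stages run only when the cheaper gates before them have passed.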
How does DevOps support the cloud-native approach?
Moving software development to the cloud has so many advantages that more and more companies are
adopting cloud-native computing. Building, testing, and deploying applications from the cloud saves money
because organizations can scale resources more easily, support faster software shipping, align with business
goals, and free up DevOps teams to innovate rather than maintain infrastructure.
Cloud-native application development enables developers and operations teams to work more
collaboratively, which results in better software delivered faster.
What is a DevOps engineer?
A DevOps engineer is responsible for all aspects of the software development lifecycle, including
communicating critical information to the business and customers. Adhering to DevOps methodologies and
principles, they efficiently integrate development processes into workflows, introduce automation where
possible, and test and analyze code. They build, evaluate, deploy, and update tools and platforms (including
IT infrastructure if necessary). DevOps engineers manage releases, as well as identify and help resolve
technical issues for software users.
DevOps engineers require knowledge of a range of programming languages and a strong set of
communication skills to be able to collaborate among engineering and business groups.
Benefits of DevOps
Adopting DevOps breaks down barriers so that development and operations teams are no longer siloed and
have a more efficient way to work across the entire development and application lifecycle. Without DevOps,
organizations often experience handoff friction, which delays the delivery of software releases and negatively
impacts business results.
The DevOps model is an organization’s answer to increasing operational efficiency, accelerating delivery, and
innovating products. Organizations that have implemented a DevOps culture experience the benefits of
increased collaboration, fluid responsiveness, and shorter cycle times.
Collaboration
Adopting a DevOps model creates alignment between development and operations teams; handoff friction is
reduced and everyone is all in on the same goals and objectives.
Fluid responsiveness
More collaboration leads to real-time feedback and greater efficiency; changes and improvements can be
implemented quicker and guesswork is removed.
Shorter cycle time
Improved efficiency and frequent communication between teams shortens cycle time; new code can be
released more rapidly while maintaining quality and security.