Advanced Software Engineering Notes 1-5


SE4151 ADVANCED SOFTWARE ENGINEERING LTPC3003

COURSE OBJECTIVES:
To understand the rationale for software development process models
To understand why the architectural design of software is important
To understand the five important dimensions of dependability, namely availability,
reliability, safety, security, and resilience
To understand the basic notions of a web service, web service standards, and
service-oriented architecture
To understand the different stages of testing, from testing during development
through to the release of a software system

UNIT I SOFTWARE PROCESS & MODELING


Prescriptive Process Models – Agility and Process – Scrum – XP – Kanban – DevOps – Prototype
Construction – Prototype Evaluation – Prototype Evolution – Modelling – Principles –
Requirements Engineering – Scenario-based Modelling – Class-based Modelling – Functional
Modelling – Behavioural Modelling.

UNIT II SOFTWARE DESIGN


Design Concepts – Design Model – Software Architecture – Architectural Styles – Architectural
Design – Component-Level Design – User Experience Design – Design for Mobility – Pattern-
Based Design.

UNIT III SYSTEM DEPENDABILITY AND SECURITY


Dependable Systems – Dependability Properties – Sociotechnical Systems – Redundancy and
Diversity – Dependable Processes – Formal Methods and Dependability – Reliability Engineering
– Availability and Reliability – Reliability Requirements – Fault-tolerant Architectures –
Programming for Reliability – Reliability Measurement – Safety Engineering – Safety-critical
Systems – Safety Requirements – Safety Engineering Processes – Safety Cases – Security
Engineering – Security and Dependability – Safety and Organizations – Security Requirements –
Secure System Design – Security Testing and Assurance – Resilience Engineering –
Cybersecurity – Sociotechnical Resilience – Resilient Systems Design.

UNIT IV SERVICE-ORIENTED SOFTWARE ENGINEERING, SYSTEMS ENGINEERING AND
REAL-TIME SOFTWARE ENGINEERING
Service-oriented Architecture – RESTful Services – Service Engineering – Service Composition
– Systems Engineering – Sociotechnical Systems – Conceptual Design – System Procurement –
System Development – System Operation and Evolution – Real-time Software Engineering –
Embedded System Design – Architectural Patterns for Real-time Software – Timing Analysis –
Real-time Operating Systems.

UNIT V SOFTWARE TESTING AND SOFTWARE CONFIGURATION MANAGEMENT


Software Testing Strategy – Unit Testing – Integration Testing – Validation Testing – System
Testing – Debugging – White-Box Testing – Basis Path Testing – Control Structure Testing –
Black-Box Testing – Software Configuration Management (SCM) – SCM Repository – SCM
Process – Configuration Management for Web and Mobile Apps.

UNIT I SOFTWARE PROCESS & MODELING

Prescriptive Process Models


The following framework activities are carried out irrespective of the process
model chosen by the organization.

1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment

The name 'prescriptive' is given because the model prescribes a set of activities,
actions, tasks, quality assurance points, and change control mechanisms for every project.

There are three types of prescriptive process models. They are:

1. The Waterfall Model


2. Incremental Process model
3. RAD model

1. The Waterfall Model

• The waterfall model is also called the 'linear sequential
model' or 'classic life cycle model'.
• In this model, each phase is fully completed before the beginning of the
next phase.
• This model is used for small projects.
• In this model, feedback is taken after each phase to ensure that the project
is on the right path.
• Testing starts only after the development is complete.
NOTE: The description of the phases of the waterfall model is the same as that of
the generic process framework.


Advantages of waterfall model


• The waterfall model is simple and easy to understand, implement, and use.
• All the requirements are known at the beginning of the project, so it is
easy to manage.
• It avoids overlapping of phases because each phase is completed before the
next begins.
• This model works for small projects because the requirements are
understood very well.
• This model is preferred for projects where quality is more
important than the cost of the project.

Disadvantages of the waterfall model


• This model is not suitable for complex and object-oriented projects.
• It is a poor model for long projects.
• Problems with this model remain undiscovered until software testing begins.
• The amount of risk is high.

2. Incremental Process model

• The incremental model combines elements of the waterfall model, applied
in an iterative fashion.
• The first increment in this model is generally the core product.
• Each increment builds on the product and submits it to the customer for any
suggested modifications.
• The next increment implements the customer's suggestions and adds
further requirements to the previous increment.
• This process is repeated until the product is finished.
For example, word-processing software is often developed using the
incremental model.
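The increment-by-increment delivery described above can be sketched as a toy Python loop; the feature names are invented for illustration:

```python
# A minimal sketch of the incremental model: each increment adds features
# to the previous delivery until the product is complete.
# (Feature names are invented for illustration.)

core_product = ["basic text editing"]          # first increment: the core product
planned_increments = [
    ["spell checking"],                        # customer-requested addition
    ["mail merge", "printing"],                # further requirements
]

deliveries = []                                # what the customer sees after each increment
product = list(core_product)
deliveries.append(list(product))               # deliver the core product first

for increment in planned_increments:
    product.extend(increment)                  # build on the previous increment
    deliveries.append(list(product))           # submit to the customer for feedback

print(len(deliveries))   # 3
print(deliveries[-1])    # the finished product with all features
```

Each entry in `deliveries` corresponds to one customer-visible release, which is the defining property of the incremental model.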

Advantages of incremental model


• This model is flexible because the cost of development is low and initial
product delivery is faster.
• It is easier to test and debug during a smaller iteration.
• Working software is produced quickly and early in the software life
cycle.
• The customer can respond to the functionality after every increment.

Disadvantages of the incremental model


• The cost of the final product may exceed the initial estimate.
• This model requires very clear and complete planning.
• Design planning is required before the whole system is broken into
small increments.
• Customer demands for additional functionality after every
increment can cause problems for the system architecture.

3. RAD model

• RAD stands for the Rapid Application Development model.
• Using the RAD model, a software product is developed in a short period of
time.
• The initial activity starts with communication between the customer and
the developer.
• Planning depends upon the initial requirements, and the requirements
are then divided into groups.
• Planning is important so that separate teams can work on different
modules in parallel.

The RAD model consists of following phases:

1. Business Modeling
• Business modeling consists of the flow of information between various
functions in the project.
• For example, what type of information is produced by each function, and
which functions handle that information.
• A complete business analysis should be performed to get the essential
business information.
2. Data modeling
• The information from the business modeling phase is refined into a set of
data objects that are essential to the business.
• The attributes of each object are identified, and the relationships
between objects are defined.
3. Process modeling
• The data objects defined in the data modeling phase are transformed to
achieve the information flow needed to implement the business model.
• Process descriptions are created for adding, modifying, deleting, or
retrieving a data object.
4. Application generation
• In the application generation phase, the actual system is built.
• Automated tools are used to construct the software.
5. Testing and turnover
• The prototypes are independently tested after each iteration, so the
overall testing time is reduced.
• The data flow and the interfaces between all the components are fully
tested. Hence, most of the program components are already tested.

Agility and Process


Agility in context of software engineering

• Agility means an effective (rapid and adaptive) response to change and
effective communication among all stakeholders.
• It means drawing the customer onto the team and organizing the team so
that it is in control of the work performed. Agile processes are light-weight,
people-based rather than plan-based methods.
• The agile process forces the development team to focus on the software
itself rather than on design and documentation.
• The agile process believes in iterative methods.
• The aim of an agile process is to deliver a working model of the software
quickly to the customer. For example, Extreme Programming is the best
known agile process.

Agile Principles

• The highest priority of this process is to satisfy the customer.
• Changing requirements are accepted, even late in development.
• Working software is delivered frequently, in short time spans.
• Throughout the project, business people and developers work together on a
daily basis.
• Projects are built around motivated people, who are given the proper
environment and support.
• Face-to-face interaction is the most efficient method of moving information
within the development team.
• The primary measure of progress is working software.
• Agile processes promote sustainable development.
• Continuous attention to technical excellence and good design increases
agility.
• The best architectures, designs, and requirements emerge from
self-organizing teams.
• Simplicity is essential in development.
Advantages:
1. Flexible and adaptable to changing requirements.
2. Emphasizes rapid prototyping and continuous delivery, which can help to
identify and fix problems early on.
3. Encourages collaboration and communication between development teams
and stakeholders.
Disadvantages:
1. It may be difficult to plan and manage a project using Agile methodologies,
as requirements and deliverables are not always well-defined in advance.
2. It can be difficult to estimate the overall time and cost of a project, as the
process is iterative and changes are made throughout the development.

Process in context of software engineering


The term software refers to the set of computer programs, procedures, and
associated documents (flowcharts, manuals, etc.) that describe the programs and
how they are to be used.
A software process is the set of activities and associated outcomes that produce a
software product. Software engineers mostly carry out these activities. There are
four key process activities, which are common to all software processes. These
activities are:
1. Software specifications: The functionality of the software and
constraints on its operation must be defined.
2. Software development: The software to meet the requirement must be
produced.
3. Software validation: The software must be validated to ensure that it
does what the customer wants.
4. Software evolution: The software must evolve to meet changing client
needs.

Software processes in software engineering refer to the methods and


techniques used to develop and maintain software. Some examples of software
processes include:
• Waterfall: a linear, sequential approach to software development, with
distinct phases such as requirements gathering, design, implementation,
testing, and maintenance.
• Agile: a flexible, iterative approach to software development, with an
emphasis on rapid prototyping and continuous delivery.
• Scrum: a popular Agile methodology that emphasizes teamwork, iterative
development, and a flexible, adaptive approach to planning and
management.
• DevOps: a set of practices that aims to improve collaboration and
communication between development and operations teams, with an
emphasis on automating the software delivery process.
Each process has its own set of advantages and disadvantages, and the choice
of which one to use depends on the specific project and organization.

Scrum

One of the most widely known and followed agile methods, the Scrum process is
usually followed by a small team. It focuses mainly on managing the process within
a development team. It uses a Scrum board and divides the whole development
process into sprints. While the duration of a sprint may vary from 2 weeks
to 1 month, each sprint includes analysis, development, and user
acceptance.
Scrum also focuses on team interaction as well as effective client involvement.
There are different roles that people play in a scrum team, such as −
Scrum Master − The Scrum master’s responsibilities are to set up meetings and
form the team, and to manage the overall sprint-wise development. He/
she also takes care of any hurdles faced during the development process.
Product owner − The product owner is responsible for creating and maintaining
the backlog, which is a repository or dashboard containing all the development
plans or requirements for the particular sprint. The product owner also assigns
and manages the requirements based on their priority order in each sprint.
Scrum team − The development team responsible for the development and
completion of the tasks and the successful execution of each sprint.
For each sprint, the product backlog is prepared by the product owner, which
comprises the requirements, tasks, and user stories for that particular sprint.
Once the tasks are selected for the sprint backlog, the development team works
on those tasks and delivers them as a product, sprint by sprint. The scrum master
manages the whole sprint process and takes care of the smooth execution of the
plan.
Every day there is a short meeting of around 15 minutes, called the daily Scrum or
stand-up meeting, where the scrum master gets a status update on the
product backlog and tries to find out if there is any blockage to further action.
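The sprint-planning flow described above can be sketched as a small Python data structure; the story names, priorities, and capacity figure are all invented for illustration:

```python
# A minimal sketch of Scrum backlog handling: the product owner keeps a
# prioritized product backlog; the team pulls the highest-priority items
# into the sprint backlog for one sprint. (All items are invented examples.)

product_backlog = [
    {"story": "user login", "priority": 1},
    {"story": "password reset", "priority": 2},
    {"story": "profile page", "priority": 3},
    {"story": "dark mode", "priority": 4},
]

def plan_sprint(backlog, capacity):
    """Select the highest-priority stories that fit the team's capacity."""
    ordered = sorted(backlog, key=lambda item: item["priority"])
    sprint_backlog = ordered[:capacity]   # what the team commits to this sprint
    remaining = ordered[capacity:]        # stays in the product backlog
    return sprint_backlog, remaining

sprint_backlog, product_backlog = plan_sprint(product_backlog, capacity=2)
print([item["story"] for item in sprint_backlog])  # ['user login', 'password reset']
```

A real tool would track much more (estimates, owners, acceptance criteria), but the priority-ordered pull from backlog to sprint is the core mechanic.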
Advantages:
1. Encourages teamwork and collaboration.
2. Provides a flexible and adaptive framework for planning and managing
software development projects.
3. Helps to identify and fix problems early on by using frequent testing and
inspection.
Disadvantages:
1. A lack of understanding of Scrum methodologies can lead to confusion and
inefficiency.
2. It can be difficult to estimate the overall time and cost of a project, as the
process is iterative and changes are made throughout the development.

XP

Extreme Programming, or simply XP, prioritizes customer satisfaction
over everything else. Developed by Kent Beck, XP requires a high level of
involvement from both the client and the developers. Projects where
there are frequent changes in customer requirements can opt for XP, as it is
flexible about changes at any point in production.
It focuses on short deliveries with checkpoints throughout the project in order to
detect any changes in requirements and act accordingly. The small
deliveries and iterative development cycles can seem disordered while underway,
but the overall picture becomes clear once development is complete. XP depends
heavily on the development team's ability to
coordinate. There are 5 phases in Extreme Programming, which are −
Planning −
The initial phase, where the clients and the developers meet and discuss the
requirements and scope of the development. Based on the client's input, the
developer team prepares short iterative user stories or development cycles for the
development as a whole. Based on the stories created, the duration and cost of the
project are estimated.
Designing −
Here all the user stories are broken down into smaller tasks, and further analysis
is done regarding the execution. Even development standards such as class and
method names, architecture, and formats are planned during designing. Test
cases are concurrently prepared for those iterative tasks. Contingency plans and
solutions are discussed for probable problems.
Coding −
The most important phase, where development based on the planning takes
place. This includes coding based on the requirements and simultaneous
documentation to update the customer on the present status. The coding
standards and architectures defined during designing are properly followed, with
a strict 40-hour work week.
Testing −
Once the coding is done, user acceptance testing starts. XP integrates testing
into the coding phase itself, so that testing and development run simultaneously.
Based on the test results, bugs are eliminated, and the product then goes through
customer acceptance testing, which is based on the customer requirements. Once
the testing is passed, the product along with the test results is delivered to
the customer.
Closure −
Here, once the product is delivered, the team waits for customer and
manager feedback. Based on the feedback, they again follow the same planning-
coding-testing iteration until the customer acceptance test is passed. The team
also provides technical support during production in case further issues arise.
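XP's practice of writing tests during the coding phase itself can be illustrated with Python's standard unittest module; the apply_discount function is an invented example, not part of any real project:

```python
# XP integrates testing into the coding phase: the test is written with
# (or before) the code it exercises. apply_discount is an invented example;
# unittest is Python's standard testing module.
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount (the 'story' being coded)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # In XP these tests run continuously during coding, not after it.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests immediately, as a developer would during the coding phase.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
)
print(result.wasSuccessful())  # True
```

Because the tests exist from the start, each small delivery can be verified before it goes to customer acceptance testing.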

Kanban

The Kanban methodology is somewhat similar to Scrum. Developed by Taiichi
Ohno, it takes its name from a Japanese word meaning an instruction card that is
followed during the production cycle of a product. It is more of a visual
methodology, heavily dependent on its Kanban board, which, like a Scrum
board, contains the complete workflow schedule of the development
process.
Unlike Scrum, where only the sprint tasks are added to the board, all the tasks
related to the production schedule are added to the Kanban board in the form of
cards. Typically, the tasks are divided into 3 columns: to-do tasks, ongoing tasks,
and completed tasks. Unlike Scrum, task completion based on priority is
optional in Kanban, and the board can contain all the tasks for the whole
development cycle in one place.
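The three-column board described above can be sketched as a simple Python structure; the card names are invented for illustration:

```python
# A minimal sketch of a Kanban board: every task in the production
# schedule lives on the board as a card in one of three columns, and
# cards move between columns as work progresses. (Cards are invented.)

board = {
    "to_do": ["design schema", "write docs", "build API"],
    "ongoing": [],
    "completed": [],
}

def move(board, card, src, dst):
    """Move a card from one column to another as its status changes."""
    board[src].remove(card)
    board[dst].append(card)

move(board, "design schema", "to_do", "ongoing")     # work starts
move(board, "design schema", "ongoing", "completed") # work finishes
print(board["completed"])  # ['design schema']
```

Real Kanban tools add work-in-progress limits per column, but the board-as-columns-of-cards structure is exactly this.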

DevOps

DevOps is a combination of two words: Development and
Operations. It is a culture that promotes the development and operations
processes collectively.

What is DevOps?
DevOps is a combination of two words: software Development and
Operations. It allows a single team to handle the entire application
lifecycle, from development to testing, deployment, and operations. DevOps
helps you to reduce the disconnection between software developers, quality
assurance (QA) engineers, and system administrators.
• Promotes collaboration between the Development and Operations teams to
deploy code to production faster in an automated and repeatable way.
• Helps to increase the organization's speed in delivering applications and
services. It also allows organizations to serve their customers better and
compete more strongly in the market.
• Defined as a sequence of development and IT operations with better
communication and collaboration.
• One of the most valuable business disciplines for enterprises and
organizations. With the help of DevOps, the quality and speed of
application delivery have improved to a great extent.
• A practice or methodology of making "Developers" and "Operations"
folks work together. DevOps represents a change in IT culture with a
complete focus on rapid IT service delivery through the adoption of agile
practices in the context of a system-oriented approach.
DevOps is all about the integration of the operations and development processes.
Organizations that have adopted DevOps have reported a 22% improvement in
software quality, a 17% improvement in application deployment frequency, a 22%
increase in customer satisfaction, and a 19% revenue increase as a result of
successful DevOps implementation.

Why DevOps?
o Before DevOps, the operations and development teams worked in complete
isolation.
o After the design-build, testing and deployment were performed
separately, which consumed more time than the actual build cycles.
o Without DevOps, team members spend a large amount of time on
designing, testing, and deploying instead of building the project.
o Manual code deployment leads to human errors in production.
o The coding and operations teams have separate timelines and are not in
sync, causing further delays.

DevOps Architecture Features


1) Automation
Automation reduces time consumption, especially during the testing and
deployment phases. Productivity increases, and releases are made quicker,
through automation. This also helps catch bugs quickly so that they can be fixed
easily. For continuous delivery, each code change is verified through automated
tests, cloud-based services, and builds. This promotes production using
automated deploys.
2) Collaboration
The Development and Operations teams collaborate as a DevOps team, which
improves the cultural model: the teams become more productive, and
accountability and ownership are strengthened. The teams share their
responsibilities and work closely in sync, which in turn makes deployment to
production faster.
3) Integration
Applications need to be integrated with other components in the environment.
The integration phase is where existing code is combined with new
functionality and then tested. Continuous integration and testing enable
continuous development. The frequency of releases and of micro-services leads
to significant operational challenges. To overcome such problems, continuous
integration and delivery are implemented to deliver in a quicker, safer,
and more reliable manner.
4) Configuration management
It ensures that the application interacts only with the resources that are relevant
to the environment in which it runs. Configuration that is external to the
application is kept separate from the source code. A configuration file can be
written during deployment, or it can be loaded at run time, depending on the
environment in which the application is running.
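The idea of separating configuration from source code and selecting it at run time can be sketched in Python; the environment names, keys, and values are invented, and a real system would read separate files written at deploy time rather than inline strings:

```python
# A minimal sketch of externalized configuration: settings live outside
# the source code and are chosen at run time by an environment variable.
# (Environment names, keys, and values are invented examples.)
import json
import os

# In a real system these would be separate files written at deploy time;
# they are inline strings here only to keep the example self-contained.
CONFIG_FILES = {
    "development": '{"db_url": "sqlite:///dev.db", "debug": true}',
    "production": '{"db_url": "postgres://db.internal/app", "debug": false}',
}

def load_config(env=None):
    """Load the configuration for the current environment at run time."""
    env = env or os.environ.get("APP_ENV", "development")
    return json.loads(CONFIG_FILES[env])

config = load_config("production")
print(config["debug"])  # False
```

The same application code runs unchanged in every environment; only the configuration selected at run time differs.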
Advantages:
1. Improves collaboration and communication between development and
operations teams.
2. Automates software delivery process, making it faster and more efficient.
3. Enables faster recovery and response time in case of issues.
Disadvantages:
1. Requires a significant investment in tools and technologies.
2. Can be difficult to implement in organizations with existing silos and lack
of culture of collaboration.
3. Need to have a skilled workforce to effectively implement the DevOps
practices.
Ultimately, the choice of which methodology to use depends on the specific
project and organization, as well as the goals and requirements of the
project.

Prototype Construction

Software prototyping refers to building software application prototypes
that display the functionality of the product under development but may not
actually hold the exact logic of the original software.
Software prototyping is becoming very popular as a software development
model, as it enables developers to understand customer requirements at an early
stage of development. It helps get valuable feedback from the customer and helps
software designers and developers understand what exactly is expected
from the product under development.

What is Software Prototyping?


A prototype is a working model of software with some limited functionality. The
prototype does not always hold the exact logic used in the actual software
application, and it is an extra effort to be considered under effort estimation.
Prototyping is used to allow users to evaluate developer proposals and try them
out before implementation. It also helps in understanding requirements that are
user-specific and may not have been considered by the developer during product
design.

Following is a stepwise approach explained to design a software prototype.


Basic Requirement Identification
This step involves understanding the very basic product requirements, especially
in terms of the user interface. The more intricate details of the internal design and
external aspects like performance and security can be ignored at this stage.

Developing the initial Prototype

The initial prototype is developed in this stage, where the very basic requirements
are showcased and user interfaces are provided. These features may not work
internally in exactly the same manner in the actual software developed;
workarounds are used to give the same look and feel to the customer in the
prototype.

Review of the Prototype


The prototype developed is then presented to the customer and the other
important stakeholders in the project. The feedback is collected in an organized
manner and used for further enhancements in the product under development.

Revise and Enhance the Prototype


The feedback and the review comments are discussed during this stage, and some
negotiations happen with the customer based on factors such as time and budget
constraints and the technical feasibility of the actual implementation. The changes
accepted are incorporated into the new prototype developed, and the cycle
repeats until the customer's expectations are met.

Prototypes can have horizontal or vertical dimensions. A horizontal prototype
displays the user interface for the product and gives a broader view of the entire
system, without concentrating on internal functions. A vertical prototype, on the
other hand, is a detailed elaboration of a specific function or subsystem in the
product. The purposes of horizontal and vertical prototypes differ.
Horizontal prototypes are used to get more information at the user interface level
and on the business requirements. They can even be presented in sales demos to
win business in the market. Vertical prototypes are technical in nature and are
used to get details of the exact functioning of the subsystems: for example,
database requirements, interaction, and data-processing load in a given
subsystem.

Types of Prototyping Models


Four types of Prototyping models are:
1. Rapid Throwaway prototypes
2. Evolutionary prototype
3. Incremental prototype
4. Extreme prototype
Rapid Throwaway Prototype
Rapid throwaway prototyping is based on the preliminary requirements. A
prototype is quickly developed to show how the requirements will look visually.
The customer’s feedback helps drive changes to the requirements, and the
prototype is created again until the requirements are baselined.
In this method, a developed prototype is discarded and will not be part of
the final product. This technique is useful for exploring ideas
and getting instant feedback on customer requirements.
Evolutionary Prototyping
Here, the prototype developed is incrementally refined based on the customer’s
feedback until it is finally accepted. It helps you to save time as well as effort,
because developing a prototype from scratch for every iteration of the
process can sometimes be very frustrating.
This model is helpful for a project that uses a new technology that is not well
understood. It is also used for a complex project where every functionality must
be checked once. It is helpful when the requirements are not stable or not
clearly understood at the initial stage.
Incremental Prototyping
In incremental prototyping, the final product is decomposed into different small
prototypes, which are developed individually. Eventually, the different prototypes
are merged into a single product. This method helps to reduce the feedback time
between the user and the application development team.
Extreme Prototyping:
The extreme prototyping method is mostly used for web development. It consists
of three sequential phases:
1. A basic prototype with all the existing pages is built in HTML format.
2. Data processing is simulated using a prototype services layer.
3. The services are implemented and integrated into the final prototype.
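Phase 2 above, simulating data processing with a prototype services layer, can be sketched in Python; the class and method names are invented, and the point is only that the page code is written against an interface that the real service later implements:

```python
# A minimal sketch of a prototype services layer for extreme prototyping:
# the HTML pages call this layer, which first returns canned data (phase 2)
# and is later replaced by the real service (phase 3). Names are invented.

class PrototypeProductService:
    """Phase 2: simulates data processing with hard-coded responses."""
    def list_products(self):
        return [{"id": 1, "name": "sample product"}]

class RealProductService:
    """Phase 3: the real implementation, integrated behind the same interface."""
    def __init__(self, database):
        self.database = database
    def list_products(self):
        return self.database.query_products()

def render_product_page(service):
    # The page code is written once against the service interface,
    # so swapping the stub for the real service needs no page changes.
    rows = service.list_products()
    return "\n".join(f"{p['id']}: {p['name']}" for p in rows)

print(render_product_page(PrototypeProductService()))  # 1: sample product
```

Because both services expose the same `list_products` interface, phase 3 consists of substituting `RealProductService` without touching the prototype pages.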

The advantages of the Prototyping Model are as follows −


• Increased user involvement in the product even before its implementation.
• Since a working model of the system is displayed, the users get a better
understanding of the system being developed.
• Reduces time and cost, as defects can be detected much earlier.
• Quicker user feedback is available, leading to better solutions.
• Missing functionality can be identified easily.
• Confusing or difficult functions can be identified.
The Disadvantages of the Prototyping Model are as follows −
• Risk of insufficient requirements analysis owing to too much dependency on
the prototype.
• Users may confuse the prototype with the actual system.
• In practice, this methodology may increase the complexity of the system,
as the scope of the system may expand beyond the original plans.
• Developers may try to reuse existing prototypes to build the actual
system, even when it is not technically feasible.
• The effort invested in building prototypes may be too great if it is not
monitored properly.

Prototype Evaluation

A prototype is the first step on your way to a successful digital product. You’ve
already spent a lot of time and money on your idea, so you shouldn’t risk
developing a product that doesn’t match your users’ needs or preferences, or
that your users won’t understand. That’s why you should invest in usability
testing and UX testing.
In a prototype evaluation, you get the chance to ask your target users for initial
feedback on design, usability, and user experience. In this phase, the prototype
is tested to see whether it meets the requirements and specifications. This is done
by evaluating its functionality, performance, and reliability.
During the user testing process, the testers check your prototype and make
sure you don’t work further on a product with usability problems or one that is
not suited to your users’ needs. Based on the feedback from your target users,
you can then adapt your first draft and move on with development.
Advantages of Prototype Evaluation
• Make sure your future product is suited to your users’ needs and
expectations
• Prevent operational blindness by involving your target group in
development
• Get feedback on design, usability, and UX at an early stage
• Only invest time and money in features your users will use

Prototype Evolution


Based on the results of the testing and evaluation phase, the prototype is refined
and improvements are made. After the refinement phase, the final prototype is
created. This prototype is then tested and evaluated to ensure that it meets all
the requirements before it is mass-produced.

Evolutionary Prototype
In an evolutionary prototype, a system or product is built via several iterations,
with each iteration building on the one before it to enhance and improve the
design. The objective is to provide a final design that satisfies the expectations
and requirements of the intended audience.
This method can be used for a wide variety of projects in various industries, but
it is frequently utilized in software development and engineering.

Modeling

Modeling is a central part of all activities that lead up to the deployment of good
software. It is required to build quality software. Modeling is widely used in
science and engineering to provide abstractions of a system at some level of
precision and detail. The model is then analyzed in order to obtain a better
understanding of the system being developed. “Modeling is the designing of
software applications before coding.”
In model-based software design and development, software modeling is used as
an essential part of the software development process. Models are built and
analyzed prior to the implementation of the system and are used to direct the
subsequent implementation.
A better understanding of a system can be obtained by considering it from
different perspectives, such as requirements models, static models, and dynamic
models of the software system. A graphical modeling language such as UML helps
in developing, understanding, and communicating the different views.
Importance of Modeling:
• Modeling gives a graphical representation of the system to be built.
• Modeling contributes to a successful software organization.
• Modeling is a proven and well-accepted engineering technique.
• Modeling is not just a part of the building industry. It would be
inconceivable to deploy a new aircraft or an automobile without first
building models, from computer models to physical wind-tunnel models to
full-scale prototypes.
• A model is a simplification of reality. A model provides the blueprint of a
system.
• A model may be structural, emphasizing the organization of the system,
or it may be behavioral, emphasizing the dynamics of the system.
• Models are built for a better understanding of the system that we are
developing:
a. Models help us to visualize a system as it is or as we want it to be.
b. Models permit us to specify the structure or behavior of a system.
c. Models give us a template that guides us in constructing a system.
d. Models support the decisions we have made.

The Software Development Life Cycle, or SDLC, is the process of planning,
designing, developing, testing, and deploying high-quality software at the lowest
cost possible, preferably in the shortest amount of time. To achieve this goal,
software engineering teams must choose the correct software development
model to fit their organization's requirements, stakeholders' expectations, and
the project. There are myriad software development models, each with distinct
advantages and disadvantages.
Details of the project, including timeframe and budget, should influence your
choice of model. The goal is to select a software development model that will
ensure project success. Selecting the incorrect model will result in drawn-out
timelines, exceeded budgets, low-quality outputs, and even project failure.

Principles
There are several basic principles of a good software engineering approach that
are commonly followed by software developers and engineers to produce high-
quality software. Some of these principles include:
1. Modularity: Breaking down the software into smaller, independent, and
reusable components or modules. This makes the software easier to
understand, test, and maintain.
2. Abstraction: Hiding the implementation details of a module or component
and exposing only the necessary information. This makes the software
more flexible and easier to change.
3. Encapsulation: Wrapping the data and functions of a module or
component into a single unit, and providing controlled access to that unit.
This helps to protect the data and functions from unauthorized access and
modification.
4. DRY principle (Don’t Repeat Yourself): Avoiding duplication of code
and data in the software. This makes the software more maintainable and
less error-prone.
5. KISS principle (Keep It Simple, Stupid): Keeping the software design
and implementation as simple as possible. This makes the software more
understandable, testable, and maintainable.
6. YAGNI (You Ain’t Gonna Need It): Avoiding adding unnecessary
features or functionality to the software. This helps to keep the software
focused on the essential requirements and makes it more maintainable.
7. SOLID principles: A set of principles that guide the design of software to
make it more maintainable, reusable, and extensible. This includes the
Single Responsibility Principle, Open/Closed Principle, Liskov Substitution
Principle, Interface Segregation Principle, and Dependency Inversion
Principle.
8. Test-driven development: Writing automated tests before writing the
code, and ensuring that the code passes all tests before it is considered
complete. This helps to ensure that the software meets the requirements
and specifications.
By following these principles, software engineers can develop software that is
more reliable, maintainable, and extensible. It is also important to note that
these principles are not mutually exclusive; they often work together to improve
the overall quality of the software.
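As a brief illustration of several of these principles working together (modularity, the Single Responsibility Principle, and abstraction), consider the following Python sketch; the class names are hypothetical, not from the text:

```python
# A minimal sketch illustrating the Single Responsibility Principle:
# each class has exactly one reason to change.

class ReportFormatter:
    """Responsible only for turning data into text."""
    def format(self, data: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class ReportSaver:
    """Responsible only for persistence."""
    def save(self, text: str, path: str) -> None:
        with open(path, "w") as f:
            f.write(text)

# Formatting and saving can now evolve (and be tested) independently,
# instead of living inside one monolithic Report class.
formatter = ReportFormatter()
print(formatter.format({"total": 3}))
```

Because each class does one thing, a change to the output format never risks breaking persistence, and vice versa.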

Requirement Engineering
Requirements engineering (RE) refers to the process of defining, documenting,
and maintaining requirements in the engineering design process. Requirement
engineering provides the appropriate mechanism to understand what the
customer desires, analyzing the need, and assessing feasibility, negotiating a
reasonable solution, specifying the solution clearly, validating the specifications
and managing the requirements as they are transformed into a working system.
Thus, requirement engineering is the disciplined application of proven principles,
methods, tools, and notation to describe a proposed system's intended behavior
and its associated constraints.

Tools involved in requirement engineering:


• Observation report
• Questionnaire (survey, poll)
• Use cases
• User stories
• Requirement workshop
• Mind mapping
• Role playing
• Prototyping

Requirement Engineering Process


It is a five-step process, which includes -
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management
1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for developing
software that is acceptable to users, flexible to change, and conformant to
established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current
technologies, which are needed to accomplish customer requirements
within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to
which the required software will perform the steps needed to solve business
problems and satisfy customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the
necessary software can generate financial profits for an organization.

2. Requirement Elicitation and Analysis:


This is also known as the gathering of requirements. Here, requirements are
identified with the help of customers and existing system processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements
are analyzed to identify inconsistencies, defects, omissions, etc. We describe
requirements in terms of relationships and also resolve conflicts, if any.
Problems of Elicitation and Analysis
o Getting all, and only, the right people involved.
o Stakeholders often don't know what they want
o Stakeholders express requirements in their terms.
o Stakeholders may have conflicting requirements.
o Requirements change during the analysis process.
o Organizational and political factors may influence system requirements.

3. Software Requirement Specification:


A software requirement specification is a document created by a software
analyst after the requirements have been collected from various sources; the
requirements received from the customer are written in ordinary language. It is
the job of the analyst to translate these requirements into technical language so
that they can be understood and used by the development team.
The models used at this stage include ER diagrams, data flow diagrams (DFDs),
function decomposition diagrams (FDDs), data dictionaries, etc.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for
modeling the requirements. DFD shows the flow of data through a system.
The system may be a company, an organization, a set of procedures, a
computer hardware system, a software system, or any combination of the
preceding. The DFD is also known as a data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store
information about all data items defined in DFDs. At the requirements
stage, the data dictionary should at least define customer data items, to
ensure that the customer and developers use the same definition and
terminologies.
o Entity-Relationship Diagrams: Another tool for requirement
specification is the entity-relationship diagram, often called an "E-R
diagram." It is a detailed logical representation of the data for the
organization and uses three main constructs i.e., data entities,
relationships, and their associated attributes.

4. Software Requirement Validation:


After the requirement specification is developed, the requirements discussed in
this document are validated. The user might demand an illegal or impossible
solution, or experts may misinterpret the needs. Requirements can be checked
against the following conditions -
o Whether they can be practically implemented
o Whether they are correct and consistent with the functionality and
specification of the software
o Whether there are any ambiguities
o Whether they are complete
o Whether they are clearly described
Requirements Validation Techniques
o Requirements reviews/inspections: systematic manual analysis of the
requirements.
o Prototyping: Using an executable model of the system to check
requirements.
o Test-case generation: Developing tests for requirements to check
testability.
o Automated consistency analysis: checking for the consistency of
structured requirements descriptions.
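As a small illustration of test-case generation, the following Python sketch derives tests directly from a hypothetical requirement; the requirement wording and the function name are assumptions for the example:

```python
# Hypothetical requirement: "A password must be at least 8 characters
# long and contain a digit." Generating test cases from this wording
# both checks the implementation and exposes whether the requirement
# is testable as written.

def password_is_valid(password: str) -> bool:
    return len(password) >= 8 and any(c.isdigit() for c in password)

# Test cases derived directly from the requirement text:
assert password_is_valid("secret42pw")        # meets both conditions
assert not password_is_valid("short1")        # violates the length rule
assert not password_is_valid("nodigitshere")  # violates the digit rule
```

If a requirement resists being turned into concrete pass/fail cases like these, that is itself a validation finding: the requirement is ambiguous or incomplete.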

Software Requirement Management:


Requirement management is the process of managing changing requirements
during the requirements engineering process and system development.
New requirements emerge during the process as business needs change and a
better understanding of the system develops.
The priority of requirements from different viewpoints changes during the
development process.
The business and technical environment of the system changes during
development.

Prerequisite of Software requirements


Collection of software requirements is the basis of the entire software
development project. Hence, they should be clear, correct, and well-defined.
A complete Software Requirement Specifications should be:
o Clear
o Correct
o Consistent
o Coherent
o Comprehensible
o Modifiable
o Verifiable
o Prioritized
o Unambiguous
o Traceable
o Credible source
Software Requirements: Broadly, software requirements must be categorized
into two categories:
1. Functional Requirements: Functional requirements define a function
that a system or system element must be able to perform, and they may
be documented in different forms. Functional requirements describe the
behavior of the system as it relates to the system's functionality.
2. Non-functional Requirements: These are requirements that specify
criteria used to judge the operation of a system, rather than specific
behaviors of the system.
Non-functional requirements are divided into two main categories:
o Execution qualities like security and usability, which are
observable at run time.
o Evolution qualities like testability, maintainability, extensibility,
and scalability, which are embodied in the static structure of the
software system.

Scenario-based Modelling
Requirements for a computer-based system can be seen in many different ways.
Some software people argue that it’s worth using a number of different modes of
representation while others believe that it’s best to select one mode of
representation. The specific elements of the requirements model are
dedicated to the analysis modeling method that is to be used.

Scenario-Based Elements
Using a scenario-based approach, the system is described from the user's point
of view. For example, basic use cases and their corresponding use-case diagrams
evolve into more elaborate template-based use cases. Figure 1(a) depicts a UML
activity diagram for eliciting requirements and representing them using use
cases. There are three levels of elaboration.
Class-Based Elements
A collection of things that have similar attributes and common behaviors, i.e.,
objects, are categorized into classes. For example, a UML class diagram can be
used to depict a Sensor class for the SafeHome security function. Note that the
diagram lists attributes of sensors and operations that can be applied to modify
these attributes. In addition to class diagrams, other analysis modeling elements
depict manner in which classes collaborate with one another and relationships
and interactions between classes.

Class-based Modelling
Class-based modeling identifies classes, attributes and relationships that the
system will use. In the airline application example, the traveler/user and the
boarding pass represent classes. The traveler's first and last name, and travel
document type represent attributes, characteristics that describe the traveler
class. The relationship between traveler and boarding pass classes is that the
traveler must enter these details into the application in order to get the boarding
pass, and that the boarding pass contains this information along with other
details like the flight departure gate, seat number etc.
Class-based modeling represents the objects that the system will manipulate
and the operations that will be applied to them.
The elements of the class-based model consist of classes and objects, attributes,
operations, and class-responsibility-collaborator (CRC) models.
Classes
Classes are identified by underlining each noun or noun phrase and entering it
into a simple table.
Classes are found in following forms:
External entities: Systems, people, or devices that generate information used
by the computer-based system.
Things: Reports, displays, letters, and signals that are part of the information
domain of the problem.
Occurrences or events: For example, a property transfer or the completion of
a series of robot movements, occurring in the context of system operation.
Roles: People, such as managers, engineers, or salespeople, who interact with
the system.
Organizational units: Divisions, groups, or teams that are relevant to an
application.
Places: For example, the manufacturing floor or loading dock, which establish
the context of the problem and the overall function of the system.
Structures: For example, sensors or computers, which define a class of objects
or related classes of objects.
Attributes
Attributes are the set of data values that completely define a class within the
context of the problem.
For example, 'employee' is a class, and the name, ID, department, designation,
and salary of the employee are its attributes.
Operations
The operations define the behavior of an object.
The operations are characterized into the following types:
• Operations that manipulate data, e.g., adding, modifying, deleting, and
displaying it.
• Operations that perform a computation.
• Operations that monitor objects for the occurrence of a controlling
event.
CRC Modeling
• CRC stands for Class-Responsibility-Collaborator.
• It provides a simple method for identifying and organizing the classes that
are applicable to the system or product requirement.
• Class is an object-oriented class name. It includes information about sub-
classes and super-classes.
• Responsibilities are the attributes and operations that are related to the
class.
• Collaborations are identified by determining whether a class can achieve
each of its responsibilities on its own. If it cannot, it needs to interact
with another class, its collaborator.
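A Class-Responsibility-Collaborator card can itself be sketched as a simple data structure; the Sensor/ControlPanel names below are illustrative, echoing the SafeHome example rather than prescribing it:

```python
# A CRC (Class-Responsibility-Collaborator) card modeled as a small
# data structure. The class names used are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

sensor = CRCCard(
    name="Sensor",
    responsibilities=["read status", "report alarm condition"],
    # The sensor cannot raise the alarm by itself, so it collaborates:
    collaborators=["ControlPanel"],
)
print(sensor.name, sensor.collaborators)
```

Walking through each responsibility and asking "can this class do it alone?" is exactly how the collaborators column of the card gets filled in.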

Functional Modelling
Functional Modelling gives the process perspective of the object-oriented analysis
model and an overview of what the system is supposed to do. It defines the
function of the internal processes in the system with the aid of Data Flow
Diagrams (DFDs). It depicts the functional derivation of the data values without
indicating how they are derived when they are computed, or why they need to
be computed.
Data Flow Diagrams
Functional Modelling is represented through a hierarchy of DFDs. The DFD is a
graphical representation of a system that shows the inputs to the system, the
processing upon the inputs, the outputs of the system as well as the internal data
stores. DFDs illustrate the series of transformations or computations performed
on the objects or the system, and the external controls and objects that affect
the transformation.
Rumbaugh et al. have defined DFD as, “A data flow diagram is a graph which
shows the flow of data values from their sources in objects through processes
that transform them to their destinations on other objects.”
The four main parts of a DFD are −
• Processes - Processes are the computational activities that transform data
values. A whole system can be visualized as a high-level process. A process
may be further divided into smaller components. The lowest-level process
may be a simple function.
• Data Flows - Data flow represents the flow of data between two processes.
It could be between an actor and a process, or between a data store and
a process. A data flow denotes the value of a data item at some point of
the computation. This value is not changed by the data flow.
• Actors - Actors are the active objects that interact with the system by either
producing data and inputting them to the system, or consuming data
produced by the system. In other words, actors serve as the sources and
the sinks of data.
• Data Stores - Data stores are the passive objects that act as a repository
of data. Unlike actors, they cannot perform any operations. They are used
to store data and retrieve the stored data. They represent a data structure,
a disk file, or a table in a database.
The other parts of a DFD are −
• Constraints - Constraints specify the conditions or restrictions that need to
be satisfied over time. They allow adding new rules or modifying existing
ones.
• Control Flows – A process may be associated with a certain Boolean value
and is evaluated only if the value is true, though it is not a direct input to
the process. These Boolean values are called the control flows.
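The four main DFD elements can be loosely mapped onto code, with processes as functions, a data store as a passive container, and an actor supplying a data flow; all names here are assumptions made for the sketch:

```python
# Illustrative mapping of DFD elements onto code: an actor supplies a
# data flow, processes transform it, and a data store retains it.

data_store = []  # passive repository of data (a "data store")

def validate_order(order: dict) -> dict:   # a "process"
    order["valid"] = order.get("qty", 0) > 0
    return order

def record_order(order: dict) -> None:     # another "process"
    data_store.append(order)

incoming = {"item": "book", "qty": 2}      # data flow from an "actor"
record_order(validate_order(incoming))
print(data_store[0]["valid"])
```

Note how the data store only holds values while the processes do all the transformation, mirroring the passive/active distinction the DFD notation enforces.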

Behavioural Modelling
Behavioural Modelling indicates how software will respond to external events or
stimuli. In behavioral model, the behavior of the system is represented as a
function of specific events and time.
To create behavioral model following things can be considered:
• Evaluation of all use-cases to fully understand the sequence of interaction
within the system.
• Identification of events that drive the interaction sequence and understand
how these events relate to specific classes.
• Creating sequence for each use case.
• Building state diagram for the system.
• Reviewing the behavioral model to verify accuracy and consistency.
It describes interactions between objects. It shows how individual objects
collaborate to achieve the behavior of the system as a whole. In UML behavior of
a system is shown with the help of use case diagram, sequence diagram and
activity diagram
• A use case focuses on the functionality of a system, i.e., what a system
does for users. It shows the interaction between the system and outside
actors. Ex: students and librarians are actors; "issue book" is a use case.
• A sequence diagram shows the objects that interact and the time sequence
of their interactions. Ex: student and librarian objects enquiring for a book
and checking its availability, ordered with respect to time.
• An activity diagram specifies the important processing steps and the
operations required for each step. It does not show objects. Ex: "issue
book" and "check availability" are activities.
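The library-book example above can be sketched as a minimal state machine, the kind of behavior a state diagram captures; the state and event names are illustrative assumptions:

```python
# A behavioral-model sketch: the library book rendered as a small
# state machine mapping (state, event) pairs to the next state.

TRANSITIONS = {
    ("available", "issue"): "issued",
    ("issued", "return"): "available",
    ("available", "reserve"): "reserved",
}

def next_state(state: str, event: str) -> str:
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "available"
state = next_state(state, "issue")   # a student issues the book
assert state == "issued"
state = next_state(state, "return")  # the book is returned
print(state)
```

Each row of the transition table corresponds to one labeled arrow in the state diagram, which is why reviewing the table against the diagram is an easy consistency check.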
UNIT II SOFTWARE DESIGN

Design Concepts

What is Software Design?


Software design is a method that converts user requirements into a suitable form
for the programmer to employ in software coding and implementation. It is
concerned with converting the client's requirements as defined in the SRS
(Software Requirement Specification) document into a form that can be easily
implemented using a programming language. A good software designer needs to
have knowledge of what software engineering is.

The software design phase is the first step in the SDLC (Software Development
Life Cycle) that shifts the focus from the problem domain to the solution domain.
In software design, the system is viewed as a collection of components or
modules with clearly defined behaviors and bounds.

Objectives of Software Design


The following objectives describe what is software design in software
engineering.
• Correctness: A good design should be correct, which means that it should
correctly implement all of the system's features.
• Efficiency: A good software design should consider resource, time, and
cost optimization parameters.
• Understandability: A good design should be easy to grasp, which is why
it should be modular, with all parts organized in layers.
• Completeness: The design should include all components, such as data
structures, modules, and external interfaces, among others.
• Maintainability: A good software design should be flexible when the client
issues a modification request.

Levels of Software Design

There are three levels of software design.


Architectural Design
A system's architecture can be defined as the system's overall structure and how
that structure offers conceptual integrity to the system. The architectural
design characterizes the software as a system with numerous interconnected
components. The designers acquire an overview of the proposed solution domain
at this level.
High-level Design
The high-level design deconstructs the architectural design's 'single entity-
multiple component' notion into a less abstract perspective of subsystems and
modules, depicting their interaction with one another. High-level design is
concerned with how the system and its components can be implemented as
modules. It recognizes the modular structure of each subsystem, as well as their
relationship and interaction with one another.
Detailed Design
After the high-level design is completed, the detailed design begins. Each module
is extensively investigated at this level of software design to establish the data
structures and algorithms to be used. Finally, a module specification
document is used to document the stage's outcome. It defines the logical
structure of each module as well as its interfaces with other modules.

Software Design Concepts


Let us look at some software design concepts that assist a software engineer in
creating the model of the system or software product to be developed or built.
The following ideas should be grasped before designing a software system.
Abstraction
One of the fundamental concepts of object-oriented programming
(OOP) languages is an abstraction. Its primary purpose is to deal with complexity
by concealing internal details from the user. This allows the user to build more
complicated logic on top of the offered abstraction without having to understand
or even consider all the hidden complexity.
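A minimal sketch of abstraction, assuming hypothetical class names: callers depend only on the abstract interface, while the internal details stay hidden behind it:

```python
# Abstraction sketch: code programs against an abstract interface and
# never sees the concealed internal details.
from abc import ABC, abstractmethod

class PaymentMethod(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str: ...

class CardPayment(PaymentMethod):
    def pay(self, amount: float) -> str:
        # internal card-network details are hidden from the caller
        return f"charged {amount:.2f} to card"

def checkout(method: PaymentMethod, amount: float) -> str:
    return method.pay(amount)  # depends only on the abstraction

print(checkout(CardPayment(), 9.99))
```

New payment methods can be added without touching `checkout`, which is the "more complicated logic on top of the abstraction" the text describes.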
Modularity
Modularity refers to breaking a system or project into smaller sections to lessen
the system's or project's complexity. Similarly, modularity in design refers to the
division of a system into smaller elements that can be built independently and
then used in multiple systems to execute different purposes. Sometimes to deal
with Monolithic software, which is difficult to grasp for software engineers, it is
required to partition the software into components known as modules. As a
result, modularity in design has become a trend that is also essential.
Architecture
A system's software architecture represents the design decisions linked to the
general structure and behavior of the system. Architecture assists stakeholders
in comprehending and analyzing how the system will attain critical characteristics
such as modifiability, availability, and security. It specifies how components
of a software system are constructed, as well as their relationships and
communication. It acts as a software application blueprint and a development
foundation for the developer team.
Refinement
Refinement means removing any impurities and improving the quality of
something. The software design refinement idea is a process of building or
presenting the software or system in a detailed manner, which implies
elaborating on a system or software. In addition, refinement is essential for
identifying and correcting any possible errors.
Design Patterns
A Software Design Pattern is a general, reusable solution to a commonly
occurring problem within a given context in software design. They are templates
to solve common software engineering problems, representing some of the finest
practices experienced object-oriented software engineers utilize. A design
pattern systematically describes and explains a general design that handles a
recurring design challenge in object-oriented systems. It discusses the problem,
the remedy, when to use it, and the repercussions. It also provides
implementation guidance and examples.
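As one brief example, the Strategy pattern addresses a recurring design problem, supporting interchangeable algorithms behind a common interface; the names below are illustrative, not from the text:

```python
# Strategy pattern sketch: the sorting policy is a pluggable strategy,
# so new policies can be added without modifying the display logic.

def by_price(items):
    return sorted(items, key=lambda i: i["price"])

def by_name(items):
    return sorted(items, key=lambda i: i["name"])

def display(items, strategy):  # the context accepts any strategy
    return [i["name"] for i in strategy(items)]

catalog = [{"name": "pen", "price": 2}, {"name": "bag", "price": 9}]
print(display(catalog, by_price))  # ['pen', 'bag']
print(display(catalog, by_name))   # ['bag', 'pen']
```

This is the shape a pattern catalog entry describes: the problem (hard-coded algorithm choice), the remedy (encapsulated interchangeable strategies), and the consequence (open to extension, closed to modification).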
Information/Data Hiding
Simply put, information hiding implies concealing information so that an
unauthorized entity cannot access it. In software design, information hiding is
accomplished by creating modules in such a way that information acquired or
contained in one module is concealed and cannot be accessible by other modules.
Refactoring
Refactoring is the process of reorganizing code without affecting its original
functionality. Refactoring aims to improve internal code by making modest
changes that do not affect the code's exterior behavior. Computer programmers
and software developers refactor code to improve the software's design,
structure, and implementation. As a result, Refactoring increases code readability
while decreasing complications. Refactoring can also assist software engineers in
locating faults or vulnerabilities in their code.
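A minimal refactoring sketch, with hypothetical names: a duplicated discount rule is extracted into one function, improving the structure while leaving external behavior unchanged:

```python
# Refactoring sketch: behavior is preserved, internal structure improves.

# Before: the discount rule is buried inside an expression
def total_before(prices):
    return sum(p - p * 0.1 if p > 100 else p for p in prices)

# After: the rule has one name and one home (extract-function refactoring)
def discounted(price: float) -> float:
    return price * 0.9 if price > 100 else price

def total_after(prices):
    return sum(discounted(p) for p in prices)

# External behavior is unchanged:
sample = [50, 200]
assert total_before(sample) == total_after(sample)
print(total_after(sample))
```

Keeping the old and new versions agreeing on the same inputs, ideally via an automated test suite, is what makes a refactoring safe rather than a rewrite.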

Design Model
A design model in software engineering is an object-based picture or pictures that
represent the use cases for a system. Or to put it another way, it's the means to
describe a system's implementation and source code in a diagrammatic
fashion. Software modeling should address the entire software design including
interfaces, interactions with other software, and all the software methods.
Software models are ways of expressing a software design. Usually, some sort of
abstract language or pictures are used to express the software design. For object-
oriented software, an object modeling language such as UML is used to develop
and express the software design. There are several tools that you can use to
develop your UML design.

In almost all cases a modeling language is used to develop the design not just to
capture the design after it is complete. This allows the designer to try different
designs and decide which will be best for the final solution. Think of designing
your software as you would a house. You start by drawing a rough sketch of the
floor plan and layout of the rooms and floors. The drawing is your modeling
language and the resulting blueprint will be a model of your final design. You will
continue to modify your drawings until you arrive at a design that meets all your
requirements. Only then should you start cutting boards or writing code.
Again, the benefit of designing your software using a modeling language is that
you discover problems early and fix them without refactoring your code.

The design model builds on the analysis model by describing, in greater detail,
the structure of the system and how the system will be implemented. Classes
that were identified in the analysis model are refined to include the
implementation constructs.

The design model is based on the analysis and architectural requirements of the
system. It represents the application components and determines their
appropriate placement and use within the overall architecture.

In the design model, packages contain the design elements of the system, such
as design classes, interfaces, and design subsystems, that evolve from the
analysis classes. Each package can contain any number of subpackages that
further partition the contained design elements. These architectural layers form
the basis for a second-level organization of the elements that describe the
specifications and implementation details of the system.

Within each package, sequence diagrams illustrate how the objects in the classes
interact, state machine diagrams to model the dynamic behavior in classes,
component diagrams to describe the software architecture of the system, and
deployment diagrams to describe the physical architecture of the system.

Software Architecture
Software Architecture defines fundamental organization of a system and more
simply defines a structured solution. It defines how components of a software
system are assembled, their relationships, and the communication between them.
It serves as a blueprint for the software application and a development basis for
the developer team.
Software architecture defines a number of things that make the software
development process easier.
• A software architecture defines structure of a system.
• A software architecture defines behavior of a system.

• A software architecture defines component relationship.

• A software architecture defines communication structure.


• A software architecture balances stakeholder’s needs.

• A software architecture influences team structure.

• A software architecture focuses on significant elements.

• A software architecture captures early design decisions.

Importance of Software Architecture:


Software architecture comes under the design phase of the software development
life cycle. It is one of the initial steps of the whole software development
process. Without software architecture, proceeding to software development is
like building a house without designing its architecture.

So, software architecture is an important part of software application
development. From a technical and developmental point of view, below are the
reasons software architecture is important.

• Selects quality attributes to be optimized for a system.


• Facilitates early prototyping.

• Allows a system to be built component-wise.

• Helps in managing the changes in System.

Besides all these, software architecture is also important for many other factors
like the quality, reliability, maintainability, supportability, and performance of
the software.

Advantages of Software Architecture:


• Provides a solid foundation for a software project.

• Helps in providing increased performance.

• Reduces development cost.


Disadvantages of Software Architecture:
• Sometimes getting good tools and standardization becomes a problem for
software architecture.

• Initial prediction of the success of a project based on its architecture is
not always possible.

Architectural Styles
The software needs the architectural design to represent the design of the
software. IEEE defines architectural design as “the process of defining a collection
of hardware and software components and their interfaces to establish the
framework for the development of a computer system.” The software that is built
for computer-based systems can exhibit one of these many architectural styles.
Each style will describe a system category that consists of:

• A set of components (e.g., a database, computational modules) that will
perform a function required by the system.
• A set of connectors that help in coordination, communication, and
cooperation between the components.
• Constraints that define how components can be integrated to form the
system.
• Semantic models that help the designer understand the overall properties
of the system.

The use of architectural styles is to establish a structure for all the components
of the system.

An architectural style is a set of principles. You can think of it as a coarse-grained
pattern that provides an abstract framework for a family of systems. An
architectural style improves partitioning and promotes design reuse by providing
solutions to frequently recurring problems.

• Named collection. An architectural style is a named collection of
architectural design decisions that are applicable in a given development
context, constrain architectural design decisions specific to a particular
system within that context, and elicit beneficial qualities in each
resulting system.
• Recurring organizational patterns and idioms. An established, shared
understanding of common design forms, and a mark of a mature
engineering field.
• Abstraction. An abstraction of recurring composition and interaction
characteristics in a set of architectures.

Benefits of Architectural Styles in Software Engineering

Architectural styles provide several benefits. The most important of these benefits is that they provide a common language.

Another benefit is that they provide a way to have a conversation that is technology-agnostic.

This allows you to facilitate a higher level of conversation that is inclusive of patterns and principles, without getting into the specifics. For example, by using architecture styles, you can talk about client-server versus N-tier.

• Design reuse. Well-understood solutions applied to new problems.
• Code reuse. Shared implementations of invariant aspects of a style.
• Understandability of system organization. A phrase such as “client-server” conveys a lot of information.
• Interoperability. Supported by style standardization.
• Style-specific analysis. Enabled by the constrained design space.
• Visualizations. Style-specific descriptions matching the engineer’s mental models.

Architectural Design

The requirements of the software should be transformed into an architecture that describes the software’s top-level structure and identifies its components. This is accomplished through architectural design (also called system design), which acts as a preliminary ‘blueprint’ from which software can be developed. IEEE defines architectural design as ‘the process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system.’ This framework is established by examining the software requirements document and designing a model for providing implementation details. These details are used to specify the components of the system along with their inputs, outputs, functions, and the interaction between them. An architectural design performs the following functions.

1. It defines an abstraction level at which the designers can specify the functional and performance behavior of the system.

2. It acts as a guideline for enhancing the system (whenever required) by describing those features of the system that can be modified easily without affecting the system’s integrity.

3. It evaluates all top-level designs.

4. It develops and documents top-level design for the external and internal interfaces.

5. It develops preliminary versions of user documentation.

6. It defines and documents preliminary test requirements and the schedule for software integration.

The sources of architectural design are listed below.

• Information regarding the application domain for the software to be developed
• Data-flow diagrams
• Availability of architectural patterns and architectural styles

Architectural design is of crucial importance in software engineering, during which essential requirements like reliability, cost, and performance are dealt with. This task is cumbersome as the software engineering paradigm is shifting from monolithic, stand-alone, built-from-scratch systems to componentized, evolvable, standards-based, and product line-oriented systems. Also, a key challenge for designers is to know precisely how to proceed from requirements to architectural design. To avoid these problems, designers adopt strategies such as reusability, componentization, platform-based design, standards-based design, and so on.

Though the architectural design is the responsibility of developers, some other people like user representatives, systems engineers, hardware engineers, and operations personnel are also involved. All these stakeholders must also be consulted while reviewing the architectural design in order to minimize the risks and errors.

Architectural Design Representation

Architectural design can be represented using the following models.

▪ Structural model: Illustrates architecture as an ordered collection of program components.
▪ Dynamic model: Specifies the behavioral aspect of the software architecture and indicates how the structure or system configuration changes as the function changes due to change in the external environment.
▪ Process model: Focuses on the design of the business or technical process, which must be implemented in the system.
▪ Functional model: Represents the functional hierarchy of a system.
▪ Framework model: Attempts to identify repeatable architectural design patterns encountered in similar types of applications. This leads to an increase in the level of abstraction.

Architectural Design Output

The architectural design process results in an Architectural Design Document (ADD). This document consists of a number of graphical representations that comprise software models along with the associated descriptive text. The software models include the static model, interface model, relationship model, and dynamic process model. They show how the system is organized into processes at run-time.

The architectural design document gives the developers a solution to the problem stated in the Software Requirements Specification (SRS). Note that it considers in detail only those requirements that affect the program structure. In addition to the ADD, other outputs of the architectural design are listed below.

▪ Various reports including audit report, progress report, and configuration status accounts report
▪ Various plans for the detailed design phase, which include the following:
  ▪ Software verification and validation plan
  ▪ Software configuration management plan
  ▪ Software quality assurance plan
  ▪ Software project management plan

Component-Level Design

As soon as the first iteration of architectural design is complete, component-level design takes place. The objective of this design is to transform the design model into functional software. To achieve this objective, the component-level design represents the internal data structures and processing details of all the software components (defined during architectural design) at an abstraction level closer to the actual code. In addition, it specifies an interface that may be used to access the functionality of all the software components.

The component-level design can be represented using different approaches. One approach is to use a programming language, while the other is to use some intermediate design notation such as graphical (DFD, flowchart, or structure chart), tabular (decision table), or text-based (program design language), whichever is easier to translate into source code.

The component-level design provides a way to determine whether the defined algorithms, data structures, and interfaces will work properly. Note that a component (also known as a module) can be defined as a modular building block for the software. However, the meaning of component differs according to how software engineers use it. The modular design of the software should exhibit the following sets of properties.

▪ Provide simple interfaces: Simple interfaces decrease the number of interactions. Note that the number of interactions is taken into account while determining whether the software performs the desired function. Simple interfaces also provide support for reusability of components, which reduces the cost to a great extent. It not only decreases the time involved in design, coding, and testing, but the overall software development cost is also amortized gradually across several projects. A number of studies have shown that the reusability of software design is the most valuable way of reducing the cost involved in software development.

▪ Ensure information hiding: The benefits of modularity cannot be achieved merely by decomposing a program into several modules; rather, each module should be designed and developed in such a way that information hiding is ensured. It implies that the implementation details of one module should not be visible to other modules of the program. The concept of information hiding helps in reducing the cost of subsequent design changes.
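As a minimal sketch of information hiding (the Stack class and its methods are invented for this example), a module exposes a small interface while keeping its representation private:

```python
class Stack:
    """A stack whose internal representation is hidden behind a simple interface."""

    def __init__(self):
        self._items = []  # leading underscore: implementation detail, not interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items


# Clients use only push/pop/is_empty; the list-based representation could be
# swapped for a linked list without changing any client code.
s = Stack()
s.push(1)
s.push(2)
print(s.pop())       # 2
print(s.is_empty())  # False
```

Because clients never touch `_items` directly, a later change to the internal data structure does not ripple out to them, which is exactly the cost reduction the text attributes to information hiding.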

Modularity has become an accepted approach in every engineering discipline. With the introduction of modular design, the complexity of software design has considerably reduced; change in the program is facilitated, which has encouraged parallel development of systems. To achieve effective modularity, design concepts like functional independence are considered to be very important.

Functional Independence

Functional independence is the refined form of the design concepts of modularity, abstraction, and information hiding. Functional independence is achieved by developing a module in such a way that it uniquely performs a given set of functions without interacting with other parts of the system. Software that uses the property of functional independence is easier to develop because its functions can be categorized in a systematic manner. Moreover, independent modules require less maintenance and testing activity, as secondary effects caused by design modification are limited, with less propagation of errors. In short, it can be said that functional independence is the key to a good software design, and a good design results in high-quality software. There exist two qualitative criteria for measuring functional independence, namely, coupling and cohesion.

Coupling
Coupling measures the degree of interdependence among the modules. Several factors like interface complexity, type of data that pass across the interface, type of communication, number of interfaces per module, etc. influence the strength of coupling between two modules. For a better interface and a well-structured system, the modules should be loosely coupled in order to minimize the ‘ripple effect’, in which modifications in one module result in errors in other modules. Module coupling is categorized into the following types.

1. No direct coupling: Two modules are said to be ‘no direct coupled’ if they
are independent of each other.
2. Data coupling: Two modules are said to be ‘data coupled’ if they use a parameter list to pass data items for communication.
3. Stamp coupling: Two modules are said to be ‘stamp coupled’ if they
communicate by passing a data structure that stores additional information
than what is required to perform their functions.
4. Control coupling: Two modules are said to be ‘control coupled’ if they
communicate (pass a piece of information intended to control the internal
logic) using at least one ‘control flag’. The control flag is a variable whose
value is used by the dependent modules to make decisions.
5. Content coupling: Two modules are said to be ‘content coupled’ if one
module modifies data of some other module or one module is under the
control of another module or one module branches into the middle of
another module.
6. Common coupling: Two modules are said to be ‘common coupled’ if they
both reference a common data block.
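The contrast between data coupling and control coupling can be sketched as follows (the function names are invented for illustration):

```python
# Data coupling: the caller passes only the elementary data items the callee needs.
def compute_tax(amount, rate):
    return amount * rate

# Control coupling: the caller passes a control flag that steers the callee's
# internal logic -- a tighter, less desirable form of coupling.
def format_report(items, as_html):
    if as_html:
        return "<p>" + ", ".join(items) + "</p>"
    return ", ".join(items)

print(compute_tax(200.0, 0.5))                  # 100.0
print(format_report(["a", "b"], as_html=True))  # <p>a, b</p>
```

The `as_html` flag makes the caller depend on the callee's internal decision logic; splitting `format_report` into two single-purpose functions would reduce the coupling to data coupling.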

Cohesion

Cohesion measures the relative functional strength of a module. It represents the strength of the bond between the internal elements of the module. The tighter the elements are bound to each other, the higher will be the cohesion of a module. In practice, designers should avoid a low level of cohesion when designing a module. Generally, low coupling results in high cohesion and vice versa. Various types of cohesion are listed below.

1. Functional cohesion: In this, the elements within the modules are concerned with the execution of a single function.
2. Sequential cohesion: In this, the elements within the modules are involved in activities in such a way that the output from one activity becomes the input for the next activity.
3. Communicational cohesion: In this, the elements within the modules perform different functions, yet each function references the same input or output information.
4. Procedural cohesion: In this, the elements within the modules are involved in different and possibly unrelated activities in which control flows from one activity to the next.
5. Temporal cohesion: In this, the elements within the modules contain
unrelated activities that can be carried out at the same time.
6. Logical cohesion: In this, the elements within the modules perform similar
activities, which are executed from outside the module.
7. Coincidental cohesion: In this, the elements within the modules perform
activities with no meaningful relationship to one another.
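The two extremes of this scale can be sketched briefly (both routines are invented for illustration):

```python
# Functional cohesion: every element contributes to one well-defined function.
def mean(values):
    return sum(values) / len(values)

# Coincidental cohesion: unrelated activities bundled into one "utility" routine.
def misc_utils(values, text):
    average = sum(values) / len(values)  # a statistics task...
    shouted = text.upper()               # ...and an unrelated string task
    return average, shouted

print(mean([1, 2, 3]))           # 2.0
print(misc_utils([2, 4], "hi"))  # (3.0, 'HI')
```

A routine like `misc_utils` is hard to name, test, and reuse precisely because its elements have no meaningful relationship to one another.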

User Experience Design

“User Experience Design” is often used interchangeably with terms such as “User
Interface Design” and “Usability.” However, while usability and user interface (UI)
design are important aspects of UX design, they are subsets of it.

The user interface is the front-end application view with which the user interacts in order to use the software. The software becomes more popular if its user interface is:

• Attractive
• Simple to use
• Responsive in short time
• Clear to understand
• Consistent on all interface screens

There are two types of User Interface:

1. Command Line Interface: A command line interface provides a command prompt, where the user types a command and feeds it to the system. The user needs to remember the syntax of the command and its use.
2. Graphical User Interface: A graphical user interface provides a simple interactive interface to interact with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interacts with the software.

User Interface Design Process:


The analysis and design process of a user interface is iterative and can be represented by a spiral model. The analysis and design process of a user interface consists of four framework activities.

1. User, task, environmental analysis, and modeling: Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill, knowledge, and type. Based on the user profiles, users are grouped into categories, and requirements are gathered from each category. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described, and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
• Where will the interface be located physically?
• Will the user be sitting, standing, or performing other tasks unrelated
to the interface?
• Does the interface hardware accommodate space, light, or noise
constraints?
• Are there special human factors considerations driven by
environmental factors?
2. Interface Design: The goal of this phase is to define the set of interface
objects and actions i.e. Control mechanisms that enable the user to perform
desired tasks. Indicate how these control mechanisms affect the system.
Specify the action sequence of tasks and subtasks, also called a user
scenario. Indicate the state of the system when the user performs a
particular task. Always follow the three golden rules stated by Theo Mandel.
Design issues such as response time, command and action structure, error
handling, and help facilities are considered as the design model is refined.
This phase serves as the foundation for the implementation phase.
3. Interface construction and implementation: The implementation
activity begins with the creation of prototype (model) that enables usage
scenarios to be evaluated. As iterative design process continues a User
Interface toolkit that allows the creation of windows, menus, device
interaction, error messages, commands, and many other elements of an
interactive environment can be used for completing the construction of an
interface.
4. Interface Validation: This phase focuses on testing the interface. The
interface should be in such a way that it should be able to perform tasks
correctly and it should be able to handle a variety of tasks. It should achieve
all the user’s requirements. It should be easy to use and easy to learn.
Users should accept the interface as a useful one in their work.

Golden Rules:

The following are the golden rules stated by Theo Mandel that must be followed
during the design of the interface. Place the user in control:

• Define the interaction modes in such a way that does not force the user
into unnecessary or undesired actions: The user should be able to easily
enter and exit the mode with little or no effort.
• Provide for flexible interaction: Different people will use different interaction mechanisms; some might use keyboard commands, some might use a mouse, some might use a touch screen, etc. Hence, all interaction mechanisms should be provided.
• Allow user interaction to be interruptible and undoable: When a user is
doing a sequence of actions the user must be able to interrupt the sequence
to do some other work without losing the work that had been done. The
user should also be able to do undo operation.
• Streamline interaction as skill level advances and allow the interaction to
be customized: Advanced or highly skilled user should be provided a chance
to customize the interface as user wants which allows different interaction
mechanisms so that user doesn’t feel bored while using the same
interaction mechanism.
• Hide technical internals from casual users: The user should not be aware
of the internal technical details of the system. He should interact with the
interface just to do his work.
• Design for direct interaction with objects that appear on screen: The user
should be able to use the objects and manipulate the objects that are
present on the screen to perform a necessary task. By this, the user feels
easy to control over the screen.

Reduce the user’s memory load:

• Reduce demand on short-term memory: When users are involved in complex tasks, the demand on short-term memory is significant. So the interface should be designed in such a way as to reduce the need to remember previously done actions, given inputs, and results.
• Establish meaningful defaults: An initial set of defaults should always be provided for the average user; if a user needs to add some new features, he should be able to add the required features.
• Define shortcuts that are intuitive: Mnemonics (keyboard shortcuts for performing actions on the screen) should be intuitive so that users can remember and use them easily.
• The visual layout of the interface should be based on a real-world metaphor: If anything represented on the screen is a metaphor for a real-world entity, users will easily understand it.
• Disclose information in a progressive fashion: The interface should be
organized hierarchically i.e. on the main screen the information about the
task, an object or some behavior should be presented first at a high level
of abstraction. More detail should be presented after the user indicates
interest with a mouse pick.

Make the interface consistent:

• Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they navigated to the current page, and where they can navigate from the current page.
• Maintain consistency across a family of applications: A set of related applications should all follow and implement the same design rules so that consistency is maintained among them.
• If past interactive models have created user expectations do not make
changes unless there is a compelling reason.

User interface design is a crucial aspect of software engineering, as it is the means by which users interact with software applications. A well-designed user interface can improve the usability and user experience of an application, making it easier to use and more effective.

There are several key principles that software engineers should follow
when designing user interfaces:

1. User-centered design: User interface design should be focused on the needs and preferences of the user. This involves understanding the user’s goals, tasks, and context of use, and designing interfaces that meet their needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps
users to understand and learn how to use an application. Consistent design
elements such as icons, color schemes, and navigation menus should be
used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to
use, with clear and concise language and intuitive navigation. Users should
be able to accomplish their tasks without being overwhelmed by
unnecessary complexity.
4. Feedback: Feedback is important in user interface design, as it helps users
to understand the results of their actions and confirms that they are making
progress towards their goals. Feedback can take the form of visual cues,
messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all
users, regardless of their abilities. This involves considering factors such as
color contrast, font size, and assistive technologies such as screen readers.
6. Flexibility: User interfaces should be designed to be flexible and
customizable, allowing users to tailor the interface to their own preferences
and needs.

Overall, user interface design is a key component of software engineering, as it can have a significant impact on the usability, effectiveness, and user experience of an application. Software engineers should follow best practices and design principles to create interfaces that are user-centered, consistent, simple, and accessible.

Design for Mobility

Once a software system is in operation, parts of it may need to be redeployed, or migrated, in response to changes in the run-time environment. The redeployment of a software system’s components is a type of software system mobility. Mobile computing involves the movement of human users together with their hosts across different physical locations, while still being able to access an information system. This is also referred to as physical mobility. If a piece of software moves across hardware hosts during the system’s execution, that action is referred to as code mobility or logical mobility. If a software module that needs to be migrated contains run-time state, it is known as stateful mobility. If only the code needs to be migrated, it is known as stateless mobility. Supporting stateful mobility is more challenging, since the mobile component’s state on the source host needs to be captured, migrated, and reconstituted on the destination host. The effect of the component’s migration and its temporary downtime on the rest of the system, and the system’s dependence on the component’s internal state, must be considered.

There are three general classes of mobile code systems: remote evaluation, code-
on-demand and mobile agent.

1. Remote evaluation: A component on the source host has the know-how but not the resources needed for performing a service. The component is transferred to the destination host, where it is executed using the available resources. The result is returned to the source host.

In remote evaluation a software component is:

• Redeployed at run-time from a source host to a destination host.
• Installed on the destination host, ensuring that the software system’s architectural configuration and any architectural constraints are preserved.
• Activated.
• Executed to provide the desired service.
• Possibly de-activated and de-installed.

2. Code-on-Demand:

Here the needed resources are available locally, but the know-how is not. The local sub-system thus requests the component providing the know-how from the appropriate remote host. Code-on-demand requires the same steps as remote evaluation; the only difference is that the roles of the source and destination hosts are reversed.
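A toy sketch of code-on-demand (the analyze function and the variable names are invented; real systems use mobile-code platforms such as applet loaders rather than raw exec): the remote host supplies the know-how as code, while the local host holds the resources (the data) and executes it:

```python
# Source code "downloaded" from the remote host: the know-how.
remote_code = "def analyze(data):\n    return sum(data) / len(data)\n"

# Data available only on the local host: the resources.
local_data = [4, 8, 6]

namespace = {}
exec(remote_code, namespace)            # install the downloaded know-how locally
result = namespace["analyze"](local_data)
print(result)  # 6.0
```

Note that executing downloaded code like this is a security decision in itself; production mobile-code systems sandbox or verify the code before activation.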

3. Mobile agent:

If a component on a given host has the know-how for providing some service, has some execution state, and has access to some of the resources needed to provide that service, it may migrate (along with its state and local resources) to the destination host, which may have the remaining resources needed for providing the service. From a software architectural perspective, mobile agents are stateful software components.

There are some factors which have to be taken care of while migrating code; some architectural concerns of which engineers must be aware are:

Quiescence: It may be unsafe to attempt to migrate a software component in the middle of processing, while it is waiting for a result from another component, or while other components are requesting its services. Therefore, the system must provide facilities that allow temporary suspension of all interactions originating from or targeted at the component, until the component is relocated to a new host. Quiescence requires at least two capabilities. The first one must be embodied in the component itself, allowing the system to instruct the component to cease any autonomous processing. The second capability may require that special-purpose elements be inserted into the system temporarily, to insulate the component from outside requests and route them for processing after the component has been migrated.

Quality of Service: Migrating components may result in degradation of the availability of the service if some changes occur in the system. Consider a system which provides a given level of availability. If the stakeholders want to improve the availability to a higher level by migrating some components to a different host, it should be evaluated where the components should reside. Once the target host is determined, quiescence is rendered to the mobile components, which are packaged for redeployment and migrated to the target host. They are installed and activated on the target hosts, after which the system operates at the higher availability level until some changes occur in the system.

Pattern-Based Design

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem in the design of an application or system. Unlike a library or framework, which can be inserted and used right away, a design pattern is more of a template for approaching the problem at hand.

Design patterns are used to support object-oriented programming (OOP), a paradigm that is based on the concepts of both objects (instances of a class; data with unique attributes) and classes (user-defined types of data).

Design patterns can be broken down into three types, organized by their intent
into creational design patterns, structural design patterns, and behavioral design
patterns.

1. Creational Design Patterns

A creational design pattern deals with object creation and initialization, providing
guidance about which objects are created for a given situation. These design
patterns are used to increase flexibility and to reuse existing code.

• Factory Method: Creates objects with a common interface and lets a class
defer instantiation to subclasses.
• Abstract Factory: Creates a family of related objects.
• Builder: A step-by-step pattern for creating complex objects, separating
construction and representation.
• Prototype: Supports the copying of existing objects without code
becoming dependent on classes.
• Singleton: Restricts object creation for a class to only one instance.
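As a minimal sketch (the Logger, Button, and Checkbox classes are invented for illustration), the Singleton and Factory Method ideas might look like this in Python:

```python
class Logger:
    """Singleton: restricts the class to a single shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class Button:
    def render(self):
        return "button"

class Checkbox:
    def render(self):
        return "checkbox"

def widget_factory(kind):
    """Factory (function form): callers obtain objects sharing a common
    render() interface without naming concrete classes."""
    widgets = {"button": Button, "checkbox": Checkbox}
    return widgets[kind]()


assert Logger() is Logger()               # always the same instance
print(widget_factory("button").render())  # button
```

The factory keeps client code dependent only on the `render()` interface, so new widget types can be registered without touching the callers.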

2. Structural Design Patterns

A structural design pattern deals with class and object composition, or how to
assemble objects and classes into larger structures.

• Adapter: How to change or adapt an interface to that of another existing


class to allow incompatible interfaces to work together.
• Bridge: A method to decouple an interface from its implementation.
• Composite: Leverages a tree structure to support manipulation as one
object.
• Decorator: Dynamically extends (adds or overrides) functionality.
• Façade: Defines a high-level interface to simplify the use of a large body
of code.
• Flyweight: Minimize memory use by sharing data with similar objects.
• Proxy: How to represent an object with another object to enable access
control, reduce cost and reduce complexity.

3. Behavioral Design Patterns

A behavioral design pattern is concerned with communication between objects and how responsibilities are assigned between objects.

• Chain of Responsibility: A method for commands to be delegated to a chain of processing objects.
• Command: Encapsulates a command request in an object.
• Interpreter: Supports the use of language elements within an application.
• Iterator: Supports iterative (sequential) access to collection elements.
• Mediator: Articulates simple communication between classes.
• Memento: A process to save and restore the internal/original state of an
object.
• Observer: Defines how to notify objects of changes to other object(s).
• State: How to alter the behavior of an object when its state changes.
• Strategy: Encapsulates an algorithm inside a class.
• Visitor: Defines a new operation on a class without making changes to the
class.
• Template Method: Defines the skeleton of an operation while allowing
subclasses to refine certain steps.
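Of these, the Observer pattern is easy to sketch (the Subject class and the event string are invented for the example): the subject keeps a list of observers and notifies each of them when something changes:

```python
class Subject:
    """Observer pattern: the subject notifies registered observers of changes."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)


received = []
subject = Subject()
subject.attach(received.append)   # any callable can act as an observer
subject.notify("state changed")
print(received)  # ['state changed']
```

The subject knows nothing about its observers beyond the callable interface, so observers can be added or removed without changing the subject.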

Need for Design Patterns

Design patterns offer a best-practice approach to support object-oriented software design, which is easier to design, implement, change, test, and reuse. These design patterns provide best practices and structures.

1. Proven Solution - Design patterns provide a proven, reliable solution to a common problem, meaning the software developer does not have to “reinvent the wheel” when that problem occurs.

2. Reusable - Design patterns can be modified to solve many kinds of problems – they are not just tied to a single problem.

3. Expressive - Design patterns are an elegant solution.

4. Prevent the Need for Refactoring Code - Since the design pattern is already the optimal solution for the problem, this can avoid refactoring.

5. Lower the Size of the Codebase - Each pattern helps software developers change how the system works without a full redesign. Further, as the “optimal” solution, the design pattern often requires less code.
UNIT III SYSTEM DEPENDABILITY AND SECURITY

Dependable Systems

Dependable systems are desirable since they are “trustworthy,” as discussed in the security and reliability engineering communities. Dependable systems are typically characterized by the following attributes:

• Reliability: The system behaves as expected, with very few errors.

• Availability: The system and services are mostly available, with very little or no down time.

• Safety: The system does not pose unacceptable risks to the environment or the health of users.

• Confidentiality: Data and other information should not be divulged without intent and authorization.

• Survivability: The system services should be robust enough to withstand accidents and attacks.

• Integrity: System data should not be modified without intent and authorization.

• Maintainability: Maintenance of system hardware and services should not be difficult or excessively expensive.

These attributes have some overlap among themselves. Like security, dependability is a weakest-link phenomenon, in that the strength of the whole is determined by the weakest link in the chain. Thus, for a product or system to be considered dependable, it should possess all the aforementioned attributes. Conversely, a system is undependable in proportion to the degree to which it lacks these dependability attributes. In most cases, dependability is also not a binary phenomenon (present or absent) but is based on gradations and acceptable thresholds. These thresholds are specific to infrastructures such as electronic, electromechanical, and quantum, as well as to applications such as communications, process control, and data processing.

Among the dependability attributes, some need to be emphasized over others in specific system applications. For example, in banking transactions, accuracy is
crucial, and if accuracy cannot be guaranteed, the transaction must be aborted
and rolled back. In contrast, the sensors controlling a deep-sea oil rig may be large in number; the base station uses all the signals, including those from malfunctioning sensors, and constructs a composite picture from all available data. One hundred percent accuracy can be sacrificed if sufficient degrees of availability, survivability, and maintainability are achieved within a budget threshold. Similarly, intelligence communications demand security and privacy but might not be concerned with delays on the order of seconds or minutes.

One of the keys for dependable systems is that they should be empirically
verifiable in terms of their dependability. That means that fashionable or trendy
methodologies that may be very popular need to be objectively assessed on the
basis of their true effectiveness. One of the measures for dependability is the
number of faults. Faults are errors in design or implementation that
cause failures. A failure is deemed to have occurred if any of the functional
specifications of the system are not met. Failures can range from minor to
catastrophic, depending upon the impact of failure on the system and the
immediate environment. Minor failures are referred to as errors. The underlying
faults may thus be prioritized based on their potential impact. A lack of dependability means that the system falls short in one or more of the dependability attributes; these shortcomings are caused by faults in the system, which are in turn potential causes of system failure.

Faults can manifest themselves during the operation of a system. Such faults are
known as active. Otherwise, the faults may be present and possibly manifest
themselves in the future. Such faults are referred to as dormant, and the purpose
of the testing phase in systems engineering is to discover as many dormant and
active faults as possible before deployment and general use of the tested system.

Dependability Properties

Principal properties:

• Availability: The probability that the system will be up and running and
able to deliver useful services to users.
• Reliability: The probability that the system will correctly deliver services
as expected by users.
• Safety: A judgment of how likely it is that the system will cause damage
to people or its environment.
• Security: A judgment of how likely it is that the system can resist
accidental or deliberate intrusions.
• Resilience: A judgment of how well a system can maintain the continuity
of its critical services in the presence of disruptive events such as
equipment failure and cyberattacks.

Other properties of software dependability:

• Repairability reflects the extent to which the system can be repaired in the event of a failure;
• Maintainability reflects the extent to which the system can be adapted to
new requirements;
• Survivability reflects the extent to which the system can deliver services
whilst under hostile attack;
• Error tolerance reflects the extent to which user input errors can be
avoided and tolerated.

Many dependability attributes depend on one another. Safe system operation depends on the system being available and operating reliably. A system may be
unreliable because its data has been corrupted by an external attack. Denial of
service attacks on a system are intended to make it unavailable. If a system is
infected with a virus, you cannot be confident in its reliability or safety.

How to achieve dependability?

• Avoid the introduction of accidental errors when developing the system.


• Design V & V processes that are effective in discovering residual errors in
the system.
• Design systems to be fault tolerant so that they can continue in operation
when faults occur.
• Design protection mechanisms that guard against external attacks.
• Configure the system correctly for its operating environment.
• Include system capabilities to recognize and resist cyberattacks.
• Include recovery mechanisms to help restore normal system service after
a failure.

Dependability costs tend to increase exponentially as higher levels of dependability are required, for two reasons. First, achieving higher levels of dependability requires more expensive development techniques and hardware. Second, increased testing and system validation are required to convince the system client and regulators that the required levels of dependability have been achieved.

Socio-technical systems
Software engineering is not an isolated activity but is part of a broader systems
engineering process. Software systems are therefore not isolated systems but
are essential components of broader systems that have a human, social or
organizational purpose.

• Equipment: hardware devices, some of which may be computers; most devices will include an embedded system of some kind.
• Operating system: provides a set of common facilities for higher levels in
the system.
• Communications and data management: middleware that provides
access to remote systems and databases.
• Application systems: specific functionality to meet some organization
requirements.
• Business processes: a set of processes involving people and computer
systems that support the activities of the business.
• Organizations: higher level strategic business activities that affect the
operation of the system.
• Society: laws, regulation and culture that affect the operation of the
system.

There are interactions and dependencies between the layers in a system and
changes at one level ripple through the other levels. For dependability, a
systems perspective is essential.

Redundancy and Diversity


Redundancy: Keep more than a single version of critical components so that if
one fails then a backup is available.
Diversity: Provide the same functionality in different ways in different
components so that they will not fail in the same way.
Redundant and diverse components should be independent so that they will not
suffer from 'common-mode' failures.

Process activities, such as validation, should not depend on a single approach, such as testing, to validate the system. Redundant and diverse process activities are especially important for verification and validation. Multiple different process activities that complement each other and allow for cross-checking help to avoid process errors, which may lead to errors in the software.

Dependable processes

To ensure a minimal number of software faults, it is important to have a well-defined, repeatable software process. A well-defined, repeatable process is one that does not depend entirely on individual skills and can be enacted by different people. Regulators use information about the process to check whether good
software engineering practice has been used. For fault detection, it is clear that
the process activities should include significant effort devoted to verification and
validation.

Dependable process characteristics:

Explicitly defined

A process that has a defined process model that is used to drive the software
production process. Data must be collected during the process that proves that
the development team has followed the process as defined in the process model.

Repeatable

A process that does not rely on individual interpretation and judgment. The
process can be repeated across projects and with different team members,
irrespective of who is involved in the development.

Dependable process activities

• Requirements reviews to check that the requirements are, as far as possible, complete and consistent.
• Requirements management to ensure that changes to the requirements
are controlled and that the impact of proposed requirements changes is
understood.
• Formal specification, where a mathematical model of the software is
created and analyzed.
• System modeling, where the software design is explicitly documented as
a set of graphical models, and the links between the requirements and
these models are documented.
• Design and program inspections, where the different descriptions of the
system are inspected and checked by different people.
• Static analysis, where automated checks are carried out on the source
code of the program.
• Test planning and management, where a comprehensive set of system
tests is designed.

Formal Methods

Formal methods are techniques used by software engineers to design safety-critical systems and their components. In software engineering, they are techniques that use mathematical expressions to model an abstract representation of the system.

Long story short – formal methods use mathematical rigour to describe and specify systems before they are implemented.

Such models are subject to proof-checking (formal specification) with regard to stability, cohesion and reliability. Proof is a core process for evaluating models using automatic theorem provers; it is based on a set of mathematical formulas to be proven, called proof obligations (formal verification). This allows potential flaws to be identified earlier, at the design stage, preventing expensive systems from being "bricked" later when placed into operation. Standard development techniques revolve around the following phases:

1. Requirements engineering
2. Architecture design
3. Implementation
4. Testing
5. Maintenance
6. Evolution

All these steps usually take place, at least to some extent, in any usable software intended for long-term use. Some of the earlier steps – particularly the design stages – may bring a sense of uncertainty in the form of unforeseen problems later in the process. The reasons include:

1. Lack of grasp of the problem as a whole


2. Dispersed engineering teams have different perceptions of the end-product
3. Lack of domain knowledge
4. Inconsistent requirements
5. Yet-to-be discovered areas of expertise
These are just some of the avoidable factors in the completion of complex projects. Safety-critical systems, in particular, have a significant need for early fault detection: it is crucial to validate that software is free of faults, especially where agile, incremental analysis and development raise quality-assurance concerns. This is where formal techniques find their highest demand.

There are notable differences between standard and formal software development methods. Formal methods are, in essence, supporting tools: the rigour of mathematics improves software production quality at any stage. They are not there to implement data processing, and the choice of programming language is irrelevant. Instead, they create a 'bridge' between modelled concepts and the environment of the final software implementation: "What shall we do?" over "How shall we do this?".

Examples of Formal Method Techniques

B METHOD

B is an example of a formal method technique that covers the whole development life-cycle. It divides software into separate components, each of which is represented as an Abstract Machine.

The B method represents system models as mathematical expressions in the Abstract Machine Notation (AMN). These models are then subject to stepwise refinement and proof-obligation evaluation, which consists of verifying invariant preservation and refinement correctness.

The B method is a widely cited technique in scientific publications concerning formal method implementation. Notably, it was used by Siemens Transportation Systems in the specification of transport automation systems in Paris and Sao Paulo.

Z NOTATION

Z notation is a model-based, abstract formal specification technique most compatible with object-oriented programming. Z defines system models in the form of states, where each state consists of variables, values and operations that move the system from one state to another.

As opposed to B, which covers the full development life-cycle, Z formalizes a specification of the system at the design level.

EVENT-B

Event-B is an evolution of the B method. In this approach, formal software specification is the process of creating a discrete model that represents a specific state of the system. The state is an abstract representation of constants, variables and transitions (events). Part of each event is the guard, which determines the condition under which the transition to another state takes place. The constructed models (blueprints) are then subject to refinement, proof obligation and decomposition to verify their correctness.

Evaluation

Before deciding on the use of formal methods, each architect must list the pros
and cons against resources available, as well as the system’s needs.

BENEFITS

1. Significantly improves reliability at the design level, decreasing the cost of testing
2. Improves system cohesion, reliability, and the safety of critical components through fault detection in the early phases of the development cycle
3. Validated models present deterministic system behavior

Reliability Engineering

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability
of a system or component to function under stated conditions for a specified
period of time. Reliability is closely related to availability, which is typically
described as the ability of a component or system to function at a specified
moment or interval of time.

The reliability function is theoretically defined as the probability of success at time t, denoted R(t). In practice, it is calculated using different techniques
and its value ranges between 0 and 1, where 0 indicates no probability of success
while 1 indicates definite success. This probability is estimated from detailed
(physics of failure) analysis, previous data sets or through reliability testing and
reliability modeling. Availability, testability, maintainability and maintenance are
often defined as a part of "reliability engineering" in reliability programs.
Reliability often plays the key role in the cost-effectiveness of systems.
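The definition of R(t) above can be made concrete with the simplest common model, the constant-failure-rate (exponential) model, under which MTBF = 1/λ. This is an illustrative sketch of one estimation technique, not the only way R(t) is obtained in practice (physics-of-failure analysis and reliability testing, as noted above, are alternatives); the failure rate used is a made-up figure.

```python
import math

def reliability(t_hours, failure_rate_per_hour):
    """R(t): probability of failure-free operation up to time t,
    under the (assumed) constant-failure-rate exponential model."""
    return math.exp(-failure_rate_per_hour * t_hours)

# A component with an MTBF of 10,000 hours has lambda = 1/10,000 per hour.
lam = 1.0 / 10_000
r = reliability(1_000, lam)  # survives the first 1,000 hours with p ~ 0.905
```

Note that R(0) = 1 (certain success at the start) and R(t) falls toward 0 as t grows, matching the 0-to-1 range described above.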

Reliability engineering deals with the prediction, prevention and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although
stochastic parameters define and affect reliability, reliability is not only achieved
by mathematics and statistics. "Nearly all teaching and literature on the subject
emphasize these aspects, and ignore the reality that the ranges of uncertainty
involved largely invalidate quantitative methods for prediction and
measurement." For example, it is easy to represent "probability of failure" as a
symbol or value in an equation, but it is almost impossible to predict its true
magnitude in practice, which is massively multivariate, so having the equation for
reliability does not begin to equal having an accurate predictive measurement of
reliability.

Reliability engineering relates closely to quality engineering, safety engineering and system safety, in that they use common methods for their analysis and may
require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on costs of failure caused by system downtime,
cost of spares, repair equipment, personnel, and cost of warranty claims.

Objectives

The objectives of reliability engineering, in decreasing order of priority, are:

1. To apply engineering knowledge and specialist techniques to prevent or to reduce the likelihood or frequency of failures.
2. To identify and correct the causes of failures that do occur despite the
efforts to prevent them.
3. To determine ways of coping with failures that do occur, if their causes
have not been corrected.
4. To apply methods for estimating the likely reliability of new designs, and
for analyzing reliability data.

The reason for this order of priority is that preventing failures is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The
primary skills that are required, therefore, are the ability to understand and
anticipate the possible causes of failures, and knowledge of how to prevent them.
It is also necessary to have knowledge of the methods that can be used for
analyzing designs and data.

Availability and Reliability

Availability refers to the percentage of time a system is available to users. Reliability refers to the likelihood that the system will meet a certain level of performance based on user needs within a certain time frame.

What is Availability?

Availability, also known as operational availability, is the ability of a system to deliver required services when it is requested to do so. Availability ensures that an
application or service is continuously available to its users. The formal definition
would be – availability is the probability that a system, at a point in time, would
remain operational under normal circumstances in order to serve its intended
purpose. Simply put, it ensures that a system is able to perform required function
under given conditions at a given point of time. Service availability can be
quantified via a simple equation as service uptime divided by the sum of service
uptime and service downtime.
Availability = Uptime / (Uptime + Downtime)
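The equation can be written directly as a small function; the uptime and downtime figures below are illustrative.

```python
def availability(uptime_hours, downtime_hours):
    """Availability = uptime / (uptime + downtime)."""
    total = uptime_hours + downtime_hours
    if total <= 0:
        raise ValueError("observation period must be positive")
    return uptime_hours / total

# 999 hours up and 1 hour down over a 1,000-hour period:
a = availability(999, 1)  # 0.999, i.e. the system is up 99.9% of the time
```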

What is Reliability?

Reliability is the ability of a system to deliver services correctly under given conditions for a given time period. Failures are inevitable in complex systems; in
fact, both native and virtualized systems are subject to the same fundamental
error and failure scenarios. This takes into account the fact that if you’re using
the system for something it’s not designed to do, you don’t get the same level of
reliability. It is the degree of consistency of a measure. A system is said to be
reliable when it delivers the same repeated results under the same conditions at
any point of time, without failures. Reliability can be measured using a metric
called mean time between failures (MTBF). It’s a measure of how reliable a
product or system is.

MTBF = Operating Time (hours) / Number of Failures
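The MTBF formula can likewise be sketched as a function; the operating time and failure count are illustrative figures.

```python
def mtbf(operating_hours, number_of_failures):
    """Mean time between failures = operating time / number of failures."""
    if number_of_failures == 0:
        return float("inf")  # no failures observed in the period
    return operating_hours / number_of_failures

m = mtbf(5_000, 4)  # on average 1,250 hours between failures
```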


Reliability is the probability of failure-free system operation over a specified time
in a given environment for a given purpose. Availability is the probability that a
system, at a point in time, will be operational and able to deliver the requested
services. Both of these attributes can be expressed quantitatively e.g.
availability of 0.999 means that the system is up and running for 99.9% of the
time.

The formal definition of reliability does not always reflect the user's
perception of a system's reliability. Reliability can only be defined formally with
respect to a system specification i.e. a failure is a deviation from a
specification. Users don't read specifications and don't know how the system is
supposed to behave; therefore, perceived reliability is more important in practice.

Availability is usually expressed as a percentage of the time that the system is available to deliver services, e.g. 99.95%. However, this does not take into account two factors:

• The number of users affected by the service outage. Loss of service in the middle of the night is less important for many systems than loss of service during peak usage periods.
• The length of the outage. The longer the outage, the more the disruption.
Several short outages are less likely to be disruptive than one long outage.
Long repair times are a particular problem.

Removing X% of the faults in a system will not necessarily improve the reliability
by X%. Program defects may be in rarely executed sections of the code so may
never be encountered by users. Removing these does not affect the perceived
reliability. Users adapt their behavior to avoid system features that may fail for
them. A program with known faults may therefore still be perceived as reliable
by its users.

Reliability Requirements

Functional reliability requirements define system and software functions that avoid, detect or tolerate faults in the software and so ensure that these faults do not lead to system failure.

Reliability is a measurable system attribute, so non-functional reliability requirements may be specified quantitatively. These define the number of failures that are acceptable during normal use of the system, or the time in which the system must be available. Software reliability requirements may also be included to cope with hardware failure or operator error.
Reliability metrics are units of measurement of system reliability. System
reliability is measured by counting the number of operational failures and, where
appropriate, relating these to the demands made on the system and the time that
the system has been operational. Metrics include:

• Probability of failure on demand (POFOD). The probability that the system will fail when a service request is made. Useful when demands for service are intermittent and relatively infrequent.
• Rate of occurrence of failures (ROCOF). Reflects the rate of occurrence
of failure in the system. Relevant for systems where the system has to
process a large number of similar requests in a short time. Mean time to
failure (MTTF) is the reciprocal of ROCOF.
• Availability (AVAIL). Measure of the fraction of the time that the system
is available for use. Takes repair and restart time into account. Relevant
for non-stop, continuously running systems.
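The three metrics above can be computed from simple operational counts. The function names and figures below are illustrative, not a standard API.

```python
def pofod(failed_demands, total_demands):
    """Probability of failure on demand."""
    return failed_demands / total_demands

def rocof(failures, operating_hours):
    """Rate of occurrence of failures per hour; MTTF is its reciprocal."""
    return failures / operating_hours

def avail(uptime_hours, downtime_hours):
    """Fraction of time the system is available, including repair/restart time."""
    return uptime_hours / (uptime_hours + downtime_hours)

p = pofod(2, 1_000)   # 0.002: two failed requests per 1,000 demands
r = rocof(4, 2_000)   # 0.002 failures/hour, so MTTF = 500 hours
a = avail(999, 1)     # 0.999
```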

Non-functional reliability requirements are specifications of the required reliability and availability of a system using one of the reliability metrics (POFOD,
ROCOF or AVAIL). Quantitative reliability and availability specification has been
used for many years in safety-critical systems but is uncommon for business-
critical systems. However, as more and more companies demand 24/7 service
from their systems, it makes sense for them to be precise about their reliability
and availability expectations.

Functional reliability requirements specify the faults to be detected and the actions to be taken to ensure that these faults do not lead to system failures.

• Checking requirements that identify checks to ensure that incorrect data is detected before it leads to a failure.
• Recovery requirements that are geared to help the system recover after a
failure has occurred.
• Redundancy requirements that specify redundant features of the system to
be included.
• Process requirements for reliability which specify the development process
to be used may also be included.

Fault-Tolerant Architectures

In critical situations, software systems must be fault tolerant. Fault tolerance is required where there are high availability requirements or where system
failure costs are very high. Fault tolerance means that the system can continue
in operation in spite of software failure. Even if the system has been proved to
conform to its specification, it must also be fault tolerant as there may be
specification errors or the validation may be incorrect.
Fault-tolerant systems architectures are used in situations where fault
tolerance is essential. These architectures are generally all based on redundancy
and diversity. Examples of situations where dependable architectures are used:

• Flight control systems, where system failure could threaten the safety of
passengers;
• Reactor systems where failure of a control system could lead to a chemical
or nuclear emergency;
• Telecommunication systems, where there is a need for 24/7 availability.

A protection system is a specialized system that is associated with some other control system and can take emergency action if a failure occurs, e.g. a system
to stop a train if it passes a red light, or a system to shut down a reactor if
temperature/pressure are too high. Protection systems independently monitor
the controlled system and the environment. If a problem is detected, it issues
commands to take emergency action to shut down the system and avoid a
catastrophe. Protection systems are redundant because they include monitoring
and control capabilities that replicate those in the control software. Protection
systems should be diverse and use different technology from the control software.
They are simpler than the control system so more effort can be expended in
validation and dependability assurance. Aim is to ensure that there is a low
probability of failure on demand for the protection system.
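The monitoring-plus-emergency-action idea can be sketched in a few lines. The temperature threshold, the sensor reading, and the shutdown action are all illustrative assumptions, not taken from any real reactor system.

```python
SAFE_TEMPERATURE_LIMIT = 350.0  # assumed safe limit, degrees Celsius

def protection_check(temperature_reading, shutdown_action):
    """Independently monitor the controlled system; on a dangerous
    reading, issue the emergency action and report what happened."""
    if temperature_reading > SAFE_TEMPERATURE_LIMIT:
        shutdown_action()  # e.g. drop control rods, close valves
        return "emergency-shutdown"
    return "normal"

log = []
status = protection_check(380.0, lambda: log.append("shutdown issued"))
```

Because the check is deliberately simple, much more validation effort can be concentrated on it than on the full control software, which is exactly the argument made above.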

Self-monitoring architecture is a multi-channel architecture where the system monitors its own operations and takes action if inconsistencies are detected. The
same computation is carried out on each channel and the results are compared.
If the results are identical and are produced at the same time, then it is assumed
that the system is operating correctly. If the results are different, then a failure
is assumed and a failure exception is raised. Hardware in each channel has to be
diverse so that common mode hardware failure will not lead to each channel
producing the same results. Software in each channel must also be diverse,
otherwise the same software error would affect each channel. If high-availability
is required, you may use several self-checking systems in parallel. This is the
approach used in the Airbus family of aircraft for their flight control systems.

N-version programming involves multiple versions of a software system that carry out the same computations at the same time. There should be an odd number of
computers involved, typically 3. The results are compared using a voting system
and the majority result is taken to be the correct result. Approach derived from
the notion of triple-modular redundancy, as used in hardware systems.

Hardware fault tolerance depends on triple-modular redundancy (TMR). There are three replicated identical components that receive the same input and whose
outputs are compared. If one output is different, it is ignored and component
failure is assumed. Based on most faults resulting from component failures rather
than design faults and a low probability of simultaneous component failure.
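The voting step shared by N-version programming and TMR can be sketched as a majority voter over three channel outputs; the example values are illustrative.

```python
from collections import Counter

def majority_vote(channel_outputs):
    """Return the output produced by a strict majority of channels.
    If no strict majority exists, a system failure is assumed."""
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count > len(channel_outputs) // 2:
        return value
    raise RuntimeError("no majority result: failure assumed")

# Three versions compute the same result; the faulty channel is outvoted.
result = majority_vote([42, 42, 41])  # 42
```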

Programming for reliability


Good programming practices can be adopted that help reduce the incidence of
program faults. These programming practices support fault avoidance, detection,
and tolerance.

Limit the visibility of information in a program

Program components should only be allowed access to data that they need for
their implementation. This means that accidental corruption of parts of the
program state by these components is impossible. You can control visibility by
using abstract data types where the data representation is private and you only
allow access to the data through predefined operations such as get() and put().
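A minimal sketch of this abstract-data-type discipline, using Python's name mangling to keep the representation private; the class name is illustrative.

```python
class Register:
    """The data representation is private; other components may reach
    it only through the predefined operations get() and put()."""

    def __init__(self):
        self.__value = None  # double underscore: hidden from other components

    def put(self, value):
        self.__value = value

    def get(self):
        return self.__value

r = Register()
r.put(10)
```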

Check all inputs for validity

All programs take inputs from their environment and make assumptions about
these inputs. However, program specifications rarely define what to do if an input
is not consistent with these assumptions. Consequently, many programs behave
unpredictably when presented with unusual inputs and, sometimes, these are
threats to the security of the system. Consequently, you should always check
inputs before processing against the assumptions made about these inputs.
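This advice can be sketched as an explicit guard that checks an input against the program's assumptions before using it; the function and its validity rules are illustrative.

```python
def set_discount_rate(rate):
    """Validate the input before processing: a discount rate is assumed
    to be a number between 0 and 1 inclusive."""
    if isinstance(rate, bool) or not isinstance(rate, (int, float)):
        raise TypeError("rate must be a number")
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must lie between 0 and 1")
    return float(rate)
```

Rejecting the unusual input at the boundary, rather than letting it propagate, is what turns "unpredictable behaviour" into a diagnosable error.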

Provide a handler for all exceptions

A program exception is an error or some unexpected event such as a power failure. Exception handling constructs allow for such events to be handled without
the need for continual status checking to detect exceptions. Using normal control
constructs to detect exceptions needs many additional statements to be added to
the program. This adds a significant overhead and is potentially error-prone.
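A short sketch of handling the expected exceptions in one place instead of scattering status checks through the code; the file name and fallback value are illustrative.

```python
def read_config(path):
    """Return the file's contents, recovering from an absent file with a
    safe default instead of crashing."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # recover: treat a missing config as empty
    except OSError as exc:
        raise RuntimeError(f"unrecoverable I/O failure: {exc}")

content = read_config("no-such-file.cfg")  # "" rather than a crash
```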

Minimize the use of error-prone constructs

Program faults are usually a consequence of human error: programmers lose track of the relationships between the different parts of the system. This is exacerbated by error-prone constructs in programming languages that are inherently complex or that don't check for mistakes when they could do so.
Therefore, when programming, you should try to avoid or at least minimize the
use of these error-prone constructs.

Error-prone constructs:

• Unconditional branch (goto) statements


• Floating-point numbers (inherently imprecise, which may lead to invalid
comparisons)
• Pointers
• Dynamic memory allocation
• Parallelism (can result in subtle timing errors because of unforeseen
interaction between parallel processes)
• Recursion (can cause memory overflow as the program stack fills up)
• Interrupts (can cause a critical operation to be terminated and make a
program difficult to understand)
• Inheritance (code is not localized, which may result in unexpected behavior
when changes are made and problems of understanding the code)
• Aliasing (using more than 1 name to refer to the same state variable)
• Unbounded arrays (may result in buffer overflow)
• Default input processing (if the default action is to transfer control
elsewhere in the program, incorrect or deliberately malicious input can then
trigger a program failure)

Provide restart capabilities

For systems that involve long transactions or user interactions, you should always
provide a restart capability that allows the system to restart after failure without
users having to redo everything that they have done.
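A minimal checkpoint/restart sketch of this idea; the checkpoint file name and state layout are illustrative assumptions, not a prescribed format.

```python
import json
import os

CHECKPOINT_FILE = "progress-checkpoint.json"  # illustrative name

def save_checkpoint(state):
    """Record progress so a restart need not redo completed work."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

def restart():
    """Resume from the last saved state, or start fresh if none exists."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"completed_steps": 0}

save_checkpoint({"completed_steps": 3})
resumed = restart()  # picks up at step 3 after a simulated failure
```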

Check array bounds

In some programming languages, such as C, it is possible to address a memory location outside of the range allowed for in an array declaration. This leads to the well-known 'buffer overflow' vulnerability, where attackers write executable code into memory by deliberately writing beyond the top element of an array. If your language does not include bounds checking, you should therefore always check that an array access is within the bounds of the array.
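In a language without built-in bounds checking you would write the guard yourself. Python raises IndexError on its own, so the sketch below simply makes the C-style discipline explicit; the function name is illustrative.

```python
def element_at(values, index):
    """Refuse any access outside the declared bounds of the array."""
    if not 0 <= index < len(values):
        raise IndexError(f"index {index} outside 0..{len(values) - 1}")
    return values[index]

x = element_at([10, 20, 30], 2)  # 30
```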

Include timeouts when calling external components

In a distributed system, failure of a remote computer can be 'silent' so that programs expecting a service from that computer may never receive that service
or any indication that there has been a failure. To avoid this, you should always
include timeouts on all calls to external components. After a defined time period
has elapsed without a response, your system should then assume failure and take
whatever actions are required to recover from this.
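The rule can be sketched with a socket connection that gives up after a deadline; the host, port, and recovery value are illustrative assumptions.

```python
import socket

def call_remote(host, port, timeout_seconds=1.0):
    """Never wait forever on a remote component: after the deadline,
    assume failure and let the caller run its recovery action."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return "ok"
    except OSError:  # covers timeouts, refused connections, unreachable hosts
        return "assumed-failed"

# Nothing is listening on this local port, so the call fails fast:
status = call_remote("127.0.0.1", 1, timeout_seconds=0.5)
```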

Name all constants that represent real-world values

Always give constants that reflect real-world values (such as tax rates) names rather than using their numeric values, and always refer to them by name. You are less likely to make mistakes and type the wrong value when you use a name rather than a value. And when these 'constants' change (for sure, they are not really constant), you only have to make the change in one place in your program.
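A one-line illustration of naming a real-world value; the rate itself is a made-up figure, not a real tax rate.

```python
# Name the real-world value once; when it changes, edit one place only.
BASIC_RATE = 0.20  # illustrative rate

def tax_due(taxable_income):
    return taxable_income * BASIC_RATE

t = tax_due(1_000)  # 200.0
```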
Reliability Measurement

Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.

Measuring software reliability is a difficult problem because we do not have a good understanding of the nature of software. It is hard to find a suitable method to measure software reliability, or most of the aspects connected to it; even the estimates in use have no uniform definition. If we cannot measure reliability directly, we can measure something that reflects the features related to reliability.

The current methods of software reliability measurement can be divided into four categories:

1. Product Metrics

Product metrics are derived from the artifacts that are built, i.e., requirement specification documents, system design documents, the source code, etc. These metrics help assess whether the product is good enough through records of attributes like usability, reliability, maintainability and portability. The measurements are taken from the actual body of the source code.

i. Software size is thought to be reflective of complexity, development effort,
and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an
initial intuitive approach to measuring software size. The basis of LOC is
that program length can be used as a predictor of program characteristics
such as effort and ease of maintenance. Note, however, that LOC depends
on the programming language used.
ii. Function point metric is a technique to measure the functionality of
proposed software development based on the count of inputs, outputs,
master files, inquiries, and interfaces. It is a measure of the functional
complexity of the program and is independent of the programming
language.
iii. Test coverage metrics estimate fault content and reliability by performing
tests on software products, assuming that software reliability is a function of
the portion of software that is successfully verified or tested.
iv. Complexity is directly linked to software reliability, so representing
complexity is essential. Complexity-oriented metrics is a way of
determining the complexity of a program's control structure by simplifying
the code into a graphical representation. The representative metric is
McCabe's Complexity Metric.
v. Quality metrics measure the quality at various steps of software product
development. A vital quality metric is Defect Removal Efficiency
(DRE). DRE measures the effectiveness of the quality assurance and
control activities applied throughout the development process.
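Two of the metrics above can be computed directly; this is a minimal sketch, with illustrative numbers rather than data from any real project:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's complexity metric for a control-flow graph: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = E / (E + D): the fraction of all defects removed before delivery."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

# A control-flow graph with 9 edges, 8 nodes, 1 component -> V(G) = 3
print(cyclomatic_complexity(9, 8))
# 90 defects found in-house, 10 reported by customers -> DRE = 0.9
print(defect_removal_efficiency(90, 10))
```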

2. Project Management Metrics

Project metrics describe project characteristics and execution. Proper
management of the project by the project manager helps us to achieve
better products. A relationship exists between the development process and the
ability to complete projects on time and within the desired quality objectives. Costs
increase when developers use inadequate processes. Higher reliability can be
achieved by using a better development process, risk management process, and
configuration management process.

These metrics are:

o Number of software developers


o Staffing pattern over the life-cycle of the software
o Cost and schedule
o Productivity

3. Process Metrics

Process metrics quantify useful attributes of the software development process and
its environment. They tell whether the process is functioning optimally by reporting
on characteristics like cycle time and rework time. The goal of process metrics is to
do the job right the first time through the process. The quality of the product is
a direct function of the process, so process metrics can be used to estimate,
monitor, and improve the reliability and quality of software. Process metrics
describe the effectiveness and quality of the processes that produce the software
product.

Examples are:

o The effort required in the process


o Time to produce the product
o Effectiveness of defect removal during development
o Number of defects found during testing
o Maturity of the process

4. Fault and Failure Metrics

A fault is a defect in a program which appears when the programmer makes an
error and causes failure when executed under particular conditions. These metrics
are used to measure the failure-free execution of software.

To achieve this objective, a number of faults found during testing and the failures
or other problems which are reported by the user after delivery are collected,
summarized, and analyzed. Failure metrics are based upon customer information
regarding faults found after release of the software. The failure data collected is
therefore used to calculate failure density, Mean Time between Failures
(MTBF), or other parameters to measure or predict software reliability.
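The two parameters named above can be computed from a failure log; this is a minimal sketch with a hypothetical log, where failure density is expressed per KLOC:

```python
def mean_time_between_failures(failure_times):
    """MTBF: the average gap between successive failure timestamps (in hours)."""
    if len(failure_times) < 2:
        return None   # at least two failures are needed to measure a gap
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

def failure_density(failures_found, size_kloc):
    """Failures per thousand lines of code."""
    return failures_found / size_kloc

times = [100, 250, 520, 700]   # hypothetical failure log, hours since release
print(mean_time_between_failures(times))   # 200.0
print(failure_density(30, 15.0))           # 2.0 failures per KLOC
```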

Safety Engineering and Safety Engineering Processes

Safety engineering is an engineering discipline which assures that engineered
systems provide acceptable levels of safety. It is strongly related to industrial
engineering and systems engineering, and to their subset, system safety engineering.

Safety engineering processes are based on reliability engineering


processes. Regulators may require evidence that safety engineering
processes have been used in system development.

Agile methods are not usually used for safety-critical systems engineering.
Extensive process and product documentation is needed for system regulation,
which contradicts the focus in agile methods on the software itself. A detailed
safety analysis of a complete system specification is important, which contradicts
the interleaved development of a system specification and program. However,
some agile techniques such as test-driven development may be used.

Process assurance involves defining a dependable process and ensuring that


this process is followed during the system development. Process assurance
focuses on:
• Do we have the right processes? Are the processes appropriate for the
level of dependability required? They should include requirements management,
change management, reviews and inspections, etc.
• Are we doing the processes right? Have these processes been followed
by the development team?

Process assurance is important for safety-critical systems development:


accidents are rare events so testing may not find all problems; safety
requirements are sometimes 'shall not' requirements so cannot be demonstrated
through testing. Safety assurance activities may be included in the software
process that record the analyses that have been carried out and the people
responsible for these.

Safety-related process activities:

• Creation of a hazard logging and monitoring system;


• Appointment of project safety engineers who have explicit responsibility for
system safety;
• Extensive use of safety reviews;
• Creation of a safety certification system where the safety of critical
components is formally certified;
• Detailed configuration management.

Formal methods can be used when a mathematical specification of the system


is produced. They are the ultimate static verification technique that may be used
at different stages in the development process. A formal specification may be
developed and mathematically analyzed for consistency. This helps discover
specification errors and omissions. Formal arguments that a program conforms
to its mathematical specification may be developed. This is effective in
discovering programming and design errors.

Advantages of formal methods

Producing a mathematical specification requires a detailed analysis of the


requirements and this is likely to uncover errors. Concurrent systems can be
analyzed to discover race conditions that might lead to deadlock. Testing for such
problems is very difficult. They can detect implementation errors before testing
when the program is analyzed alongside the specification.

Disadvantages of formal methods

Formal methods require specialized notations that cannot be understood by domain
experts. It is very expensive to develop a specification and even more expensive to
show that a program meets that specification. Proofs may contain errors. It may be
possible to reach the same level of confidence in a program more cheaply using
other V & V techniques.
Model checking involves creating an extended finite state model of a system
and, using a specialized system (a model checker), checking that model for
errors. The model checker explores all possible paths through the
model and checks that a user-specified property is valid for each path. Model
checking is particularly valuable for verifying concurrent systems, which are hard
to test. Although model checking is computationally very expensive, it is now
practical to use it in the verification of small to medium sized critical systems.
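The core idea can be sketched as a toy explicit-state model checker; the mutual-exclusion model and all names here are hypothetical, and real model checkers handle vastly larger state spaces:

```python
def check_property(initial, transitions, property_holds):
    """Explore every reachable state of the model and check a
    user-specified property in each. Returns a violating state,
    or None if the property holds on all paths."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if not property_holds(state):
            return state   # counterexample found
        frontier.extend(transitions.get(state, []))
    return None

# Toy concurrent model: states are (process_a, process_b) locations.
transitions = {
    ("idle", "idle"): [("crit", "idle"), ("idle", "crit")],
    ("crit", "idle"): [("idle", "idle")],
    ("idle", "crit"): [("idle", "idle"), ("crit", "crit")],  # buggy transition
    ("crit", "crit"): [],
}
mutual_exclusion = lambda s: s != ("crit", "crit")

print(check_property(("idle", "idle"), transitions, mutual_exclusion))
# ('crit', 'crit') -> the checker finds the state violating mutual exclusion
```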

Static program analysis uses software tools for source text processing. They
parse the program text and try to discover potentially erroneous conditions and
bring these to the attention of the V & V team. They are very effective as an aid
to inspections - they are a supplement to but not a replacement for inspections.

Three levels of static analysis:

Characteristic error checking

The static analyzer can check for patterns in the code that are characteristic of
errors made by programmers using a particular language.

User-defined error checking

Users of a programming language define error patterns, thus extending the types
of error that can be detected. This allows specific rules that apply to a program
to be checked.

Assertion checking

Developers include formal assertions in their program and relationships that must
hold. The static analyzer symbolically executes the code and highlights potential
problems.
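Assertion checking can be illustrated with Python's built-in `assert`; the function and the plausible-rate range are hypothetical examples of relationships that must hold:

```python
def apply_interest(balance, rate):
    # Formal assertions state relationships that must hold; an analyzer
    # (or the runtime) flags any execution that violates them.
    assert balance >= 0, "balance must be non-negative"
    assert 0.0 <= rate <= 0.25, "rate outside plausible range"
    new_balance = balance * (1 + rate)
    assert new_balance >= balance, "interest must not reduce the balance"
    return new_balance

print(apply_interest(1000.0, 0.05))   # 1050.0
```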

Static analysis is particularly valuable when a language such as C is used which


has weak typing and hence many errors are undetected by the compiler.
Particularly valuable for security checking - the static analyzer can discover areas
of vulnerability such as buffer overflows or unchecked inputs. Static analysis is
now routinely used in the development of many safety and security critical
systems.

Safety-critical systems

In safety-critical systems it is essential that system operation is always safe i.e.


the system should never cause damage to people or the system's environment.
Examples: control and monitoring systems in aircraft, process control systems in
chemical manufacture, automobile control systems such as braking and engine
management systems.
Two levels of safety criticality:

• Primary safety-critical systems: embedded software systems whose


failure can cause the associated hardware to fail and directly threaten
people.
• Secondary safety-critical systems: systems whose failure results in faults
in other (socio-technical) systems, which can then have safety
consequences.

Safety terminology

Accident (mishap): An unplanned event or sequence of events which results in
human death or injury, damage to property, or to the environment. An overdose of
insulin is an example of an accident.

Hazard: A condition with the potential for causing or contributing to an accident.

Damage: A measure of the loss resulting from a mishap. Damage can range from
many people being killed as a result of an accident to minor injury or property
damage.

Hazard severity: An assessment of the worst possible damage that could result
from a particular hazard. Hazard severity can range from catastrophic, where many
people are killed, to minor, where only minor damage results.

Hazard probability: The probability of the events occurring which create a hazard.
Probability values tend to be arbitrary but range from 'probable' (e.g. 1/100 chance
of a hazard occurring) to 'implausible' (no conceivable situations are likely in which
the hazard could occur).

Risk: A measure of the probability that the system will cause an accident. The risk
is assessed by considering the hazard probability, the hazard severity, and the
probability that the hazard will lead to an accident.

Safety achievement strategies:

Hazard avoidance

The system is designed so that some classes of hazard simply cannot arise.

Hazard detection and removal


The system is designed so that hazards are detected and removed before they
result in an accident.

Damage limitation

The system includes protection features that minimize the damage that may
result from an accident.

Accidents in complex systems rarely have a single cause, as these systems are
designed to be resilient to a single point of failure. Almost all accidents are a
result of combinations of malfunctions rather than single failures. Anticipating all
problem combinations, especially in software-controlled systems, is probably
impossible, so achieving complete safety is impossible. Hence, accidents are, to
some extent, inevitable.

Safety Requirements

The goal of safety requirements engineering is to identify protection


requirements that ensure that system failures do not cause injury or death or
environmental damage. Safety requirements may be 'shall not' requirements i.e.
they define situations and events that should never occur. Functional safety
requirements define: checking and recovery features that should be included in a
system, and features that provide protection against system failures and external
attacks.

Hazard-driven analysis:

Hazard identification

Identify the hazards that may threaten the system. Hazard identification may be
based on different types of hazards: physical, electrical, biological, service
failure, etc.

Hazard assessment

The process is concerned with understanding the likelihood that a risk will
arise and the potential consequences if an accident or incident should occur.
Risks may be categorized as: intolerable (must never arise or result in an
accident), as low as reasonably practical - ALARP (must minimize the
possibility of risk given cost and schedule constraints), and acceptable (the
consequences of the risk are acceptable and no extra costs should be incurred to
reduce hazard probability).

The acceptability of a risk is determined by human, social, and political


considerations. In most societies, the boundaries between the regions are pushed
upwards with time i.e. society is less willing to accept risk (e.g., the costs of
cleaning up pollution may be less than the costs of preventing it but this may not
be socially acceptable). Risk assessment is subjective.

Hazard assessment process: for each identified hazard, assess hazard probability,
accident severity, estimated risk, acceptability.
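A toy sketch of the triage step in this process; the score formula and thresholds are purely illustrative, since real classifications come from domain standards, not from code:

```python
def classify_risk(probability, severity):
    """Illustrative risk triage combining hazard probability (0..1)
    and severity (1..10). Thresholds here are invented for the example."""
    score = probability * severity
    if score >= 5.0:
        return "intolerable"
    if score >= 0.5:
        return "ALARP"
    return "acceptable"

print(classify_risk(0.9, 8))    # intolerable (score 7.2)
print(classify_risk(0.1, 6))    # ALARP (score 0.6)
print(classify_risk(0.01, 3))   # acceptable (score 0.03)
```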

Hazard analysis

Concerned with discovering the root causes of risks in a particular system.


Techniques have been mostly derived from safety-critical systems and can
be: inductive, bottom-up: start with a proposed system failure and assess the
hazards that could arise from that failure; and deductive, top-down: start with
a hazard and deduce what the causes of this could be.

Fault-tree analysis is a deductive, top-down technique:

• Put the risk or hazard at the root of the tree and identify the system states
that could lead to that hazard.
• Where appropriate, link these with 'and' or 'or' conditions.
• A goal should be to minimize the number of single causes of system failure.
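The 'and'/'or' structure of a fault tree can be evaluated mechanically; this is a minimal sketch with a hypothetical pump-overheating hazard and invented event names:

```python
def fault_tree_value(node, basic_events):
    """Evaluate a fault tree given the status of its basic events.

    A node is either a leaf name (looked up in basic_events) or a pair
    ('and' | 'or', [children]) linking sub-conditions, as in the tree above.
    """
    if isinstance(node, str):
        return basic_events[node]
    gate, children = node
    results = [fault_tree_value(child, basic_events) for child in children]
    return all(results) if gate == "and" else any(results)

# Hazard "pump overheats": (sensor fails AND alarm fails) OR power loss
tree = ("or", [("and", ["sensor_fail", "alarm_fail"]), "power_loss"])

print(fault_tree_value(tree, {"sensor_fail": True, "alarm_fail": False,
                              "power_loss": False}))   # False: no single cause
print(fault_tree_value(tree, {"sensor_fail": False, "alarm_fail": False,
                              "power_loss": True}))    # True: single-point failure
```

Note how 'power_loss' alone triggers the hazard: the tree makes such single causes of failure visible so that they can be designed out.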

Risk reduction

The aim of this process is to identify dependability requirements that specify


how the risks should be managed and ensure that accidents/incidents do not
arise. Risk reduction strategies: hazard avoidance; hazard detection and removal;
damage limitation.

Safety Cases

A safety case is a written body of evidence that identifies the hazards and risks of
a manufactured product or installation. A safety case is a structured argument,
supported by evidence, intended to justify that a system is acceptably safe, with
any danger or damage reduced to a level that is as low as reasonably practicable
(ALARP). In industries like transportation and medicine, safety cases are
mandatory and legally binding. Safety cases tend to be presented as a document
of textual information and requirements accompanied by a graphical notation.
The most popular graphical notation is the Goal Structuring Notation (GSN).
Although referenced as a requirement in the automotive standard ISO 26262, the
GSN notation is not especially complex: it basically sets out the goals, the
strategies justifying the claims and evidence, and the solutions that make each
goal safe.

The elements of the Goal Structuring Notation each have a symbol plus a count
and are drawn inside a shape. (N represents a number that is incremented for
each new element of that kind.) They are as follows:
• A goal G(N), shown as a rectangle, sets out an objective or sub-objective of
the safety case.
• A strategy S(N), represented as a parallelogram, describes the process or
inference between a goal and its supporting goal(s) and solutions.
• A solution Sn(N), shown inside a circle, demonstrates a reference or proof.
• A context C(N), shown as a square with curved edges, defines the limits
that apply to the outlined structure.
• A justification J(N), rendered as an oval, shows a rational or logical
statement.
• An assumption A(N), also rendered as an oval, presents an intentionally
unsubstantiated statement.

So, considering how a safety case in the GSN notation is structured, any program
that can make diagrams, such as Microsoft Visio or a mind-mapping tool, could
work. But there is a tool specifically for this, called ASTHA-GSN. It has a student
license, and the tool has some major advantages:

• A simple, easy-to-use user interface
• It tracks the number of elements you have placed and numbers them
sequentially: your first goal will automatically be G1, then G2, and so on
• It follows the structure and, knowing you are building a GSN safety case,
will tell you if what you are connecting is incorrect
• It lets you apply a color scheme:
o All goals as blue
o Strategies as green
o Solutions as red
o Contexts as yellow
o Justifications and assumptions as white and grey

Security Engineering

Security engineering is a sub-field of the broader field of computer security. It


encompasses tools, techniques and methods to support the development and
maintenance of systems that can resist malicious attacks that are intended to
damage a computer-based system or its data.

Dimensions of security:

• Confidentiality Information in a system may be disclosed or made


accessible to people or programs that are not authorized to have access to
that information.
• Integrity Information in a system may be damaged or corrupted, making
it unusable or unreliable.
• Availability Access to a system or its data that is normally available may
not be possible.

Three levels of security:

• Infrastructure security is concerned with maintaining the security of all


systems and networks that provide an infrastructure and a set of shared
services to the organization.
• Application security is concerned with the security of individual
application systems or related groups of systems.
• Operational security is concerned with the secure operation and use of
the organization's systems.
Application security is a software engineering problem where the system is
designed to resist attacks. Infrastructure security is a systems management
problem where the infrastructure is configured to resist attacks.

System security management involves user and permission


management (adding and removing users from the system and setting up
appropriate permissions for users), software deployment and
maintenance (installing application software and middleware and configuring
these systems so that vulnerabilities are avoided), attack monitoring,
detection and recovery (monitoring the system for unauthorized access, design
strategies for resisting attacks and develop backup and recovery strategies).

Operational security is primarily a human and social issue, which is concerned


with ensuring the people do not take actions that may compromise system
security. Users sometimes take insecure actions to make it easier for them to do
their jobs. There is therefore a trade-off between system security and system
effectiveness.

Security and Dependability

The security of a system is a property that reflects the system's ability to


protect itself from accidental or deliberate external attack. Security is
essential as most systems are networked so that external access to the system
through the Internet is possible. Security is an essential pre-requisite for
availability, reliability and safety.

Security terminology

Asset: Something of value which has to be protected. The asset may be the
software system itself or data used by that system.

Attack: An exploitation of a system's vulnerability. Generally, this comes from
outside the system and is a deliberate attempt to cause some damage.

Control: A protective measure that reduces a system's vulnerability. Encryption
is an example of a control that reduces a vulnerability of a weak access control
system.

Exposure: Possible loss or harm to a computing system. This can be loss or
damage to data, or can be a loss of time and effort if recovery is necessary after
a security breach.

Threat: Circumstances that have the potential to cause loss or harm. You can
think of a threat as a system vulnerability that is subjected to an attack.

Vulnerability: A weakness in a computer-based system that may be exploited to
cause loss or harm.

Four types of security threats:

• Interception threats that allow an attacker to gain access to an asset.


• Interruption threats that allow an attacker to make part of the system
unavailable.
• Modification threats that allow an attacker to tamper with a system asset.
• Fabrication threats that allow an attacker to insert false information into
a system.

Security assurance strategies:

Vulnerability avoidance

The system is designed so that vulnerabilities do not occur. For example, if there
is no external network connection then external attack is impossible.

Attack detection and elimination

The system is designed so that attacks on vulnerabilities are detected and


neutralized before they result in an exposure. For example, virus checkers find
and remove viruses before they infect a system.

Exposure limitation and recovery

The system is designed so that the adverse consequences of a successful attack


are minimized. For example, a backup policy allows damaged information to be
restored.

Security and attributes of dependability:

Security and reliability

If a system is attacked and the system or its data are corrupted as a consequence
of that attack, then this may induce system failures that compromise the
reliability of the system.

Security and availability

A common attack on a web-based system is a denial-of-service attack, where a


web server is flooded with service requests from a range of different sources. The
aim of this attack is to make the system unavailable.

Security and safety


An attack that corrupts the system or its data means that assumptions about
safety may not hold. Safety checks rely on analyzing the source code of safety
critical software and assume the executing code is a completely accurate
translation of that source code. If this is not the case, safety-related failures may
be induced and the safety case made for the software is invalid.

Security and resilience

Resilience is a system characteristic that reflects its ability to resist and recover
from damaging events. The most probable damaging event on networked
software systems is a cyberattack of some kind so most of the work now done in
resilience is aimed at deterring, detecting and recovering from such attacks.

Security and organizations

Security is expensive and it is important that security decisions are made in a


cost-effective way. There is no point in spending more than the value of an asset
to keep that asset secure. Organizations use a risk-based approach to support
security decision making and should have a defined security policy based on
security risk analysis. Security risk analysis is a business rather than a technical
process.

Security policies should set out general information access strategies that
should apply across the organization. The point of security policies is to inform
everyone in an organization about security so these should not be long and
detailed technical documents. From a security engineering perspective, the
security policy defines, in broad terms, the security goals of the organization. The
security engineering process is concerned with implementing these goals.

Security policies principles:

The assets that must be protected

It is not cost-effective to apply stringent security procedures to all organizational


assets. Many assets are not confidential and can be made freely available.

The level of protection that is required for different types of asset

For sensitive personal information, a high level of security is required; for other
information, the consequences of loss may be minor so a lower level of security
is adequate.

The responsibilities of individual users, managers and the organization

The security policy should set out what is expected of users e.g. strong
passwords, log out of computers, office security, etc.
Existing security procedures and technologies that should be maintained

For reasons of practicality and cost, it may be essential to continue to use existing
approaches to security even where these have known limitations.

Risk assessment and management is concerned with assessing the possible


losses that might ensue from attacks on the system and balancing these losses
against the costs of security procedures that may reduce these losses. Risk
management should be driven by an organizational security policy. Risk
management involves:

Preliminary risk assessment

The aim of this initial risk assessment is to identify generic risks that are
applicable to the system and to decide if an adequate level of security can be
achieved at a reasonable cost. The risk assessment should focus on the
identification and analysis of high-level risks to the system. The outcomes of the
risk assessment process are used to help identify security requirements.

Design risk assessment

This risk assessment takes place during the system development life cycle and is
informed by the technical system design and implementation decisions. The
results of the assessment may lead to changes to the security requirements and
the addition of new requirements. Known and potential vulnerabilities are
identified, and this knowledge is used to inform decision making about the system
functionality and how it is to be implemented, tested, and deployed.

Operational risk assessment

This risk assessment process focuses on the use of the system and the possible
risks that can arise from human behavior. Operational risk assessment should
continue after a system has been installed to take account of how the system is
used. Organizational changes may mean that the system is used in different ways
from those originally planned. These changes lead to new security requirements
that have to be implemented as the system evolves.

Security Requirements

Security specification has something in common with safety requirements


specification - in both cases, your concern is to avoid something bad happening.
Four major differences:

• Safety problems are accidental - the software is not operating in a


hostile environment. In security, you must assume that attackers have
knowledge of system weaknesses.
• When safety failures occur, you can look for the root cause or
weakness that led to the failure. When failure results from a deliberate
attack, the attacker may conceal the cause of the failure.
• Shutting down a system can avoid a safety-related failure. Causing a
shutdown may be the aim of an attack.
• Safety-related events are not generated from an intelligent
adversary. An attacker can probe defenses over time to discover
weaknesses.

Security requirement classification

• Risk avoidance requirements set out the risks that should be avoided by
designing the system so that these risks simply cannot arise.
• Risk detection requirements define mechanisms that identify the risk if it
arises and neutralize the risk before losses occur.
• Risk mitigation requirements set out how the system should be designed
so that it can recover from and restore system assets after some loss has
occurred.

Security risk assessment

• Asset identification: identify the key system assets (or services) that
have to be protected.
• Asset value assessment: estimate the value of the identified assets.
• Exposure assessment: assess the potential losses associated with each
asset.
• Threat identification: identify the most probable threats to the system
assets.
• Attack assessment: decompose threats into possible attacks on the
system and the ways that these may occur.
• Control identification: propose the controls that may be put in place to
protect an asset.
• Feasibility assessment: assess the technical feasibility and cost of the
controls.
• Security requirements definition: define system security requirements.
These can be infrastructure or application system requirements.
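The value- and exposure-assessment steps above can be sketched numerically; the assets, values, and loss probabilities here are entirely hypothetical:

```python
# Asset value assessment: monetary value of each key asset (hypothetical).
assets = {"customer_db": 500_000, "web_frontend": 50_000}

# Threat identification: estimated annual probability of loss per asset.
threats = {"customer_db": 0.02, "web_frontend": 0.10}

def annual_loss_exposure(asset_values, loss_probabilities):
    """Exposure assessment: expected yearly loss for each asset,
    used to decide which controls are worth their cost."""
    return {name: value * loss_probabilities[name]
            for name, value in asset_values.items()}

print(annual_loss_exposure(assets, threats))
# {'customer_db': 10000.0, 'web_frontend': 5000.0}
```

Such figures feed the feasibility-assessment step: a control costing more per year than the exposure it removes is unlikely to be justified.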

Misuse cases are instances of threats to a system:

• Interception threats: attacker gains access to an asset.


• Interruption threats: attacker makes part of a system unavailable.
• Modification threats: a system asset is tampered with.
• Fabrication threats: false information is added to a system.

Secure System Design

Security should be designed into a system - it is very difficult to make an


insecure system secure after it has been designed or implemented.

Adding security features to a system to enhance its security affects other


attributes of the system:

• Performance: additional security checks slow down a system so its


response time or throughput may be affected.
• Usability: security measures may require users to remember information
or require additional interactions to complete a transaction. This makes the
system less usable and can frustrate system users.
Design risk assessment is done while the system is being developed and after
it has been deployed. More information is available - system platform, middleware
and the system architecture and data organization. Vulnerabilities that arise from
design choices may therefore be identified.

During architectural design, two fundamental issues have to be considered


when designing an architecture for security:

Protection: how should the system be organized so that critical assets


can be protected against external attack?

Layered protection architecture:


Platform-level protection: top-level controls on the platform on which a
system runs.
Application-level protection: specific protection mechanisms built into the
application itself e.g., additional password protection.
Record-level protection: protection that is invoked when access to specific
information is requested.

Distribution: how should system assets be distributed so that the effects


of a successful attack are minimized?

Distributing assets means that attacks on one system do not necessarily lead to
complete loss of system service. Each platform has separate protection features
and may be different from other platforms so that they do not share a common
vulnerability. Distribution is particularly important if the risk of denial-of-service
attacks is high.

These are potentially conflicting. If assets are distributed, then they are more
expensive to protect. If assets are protected, then usability and performance
requirements may be compromised.

Design guidelines for security engineering: -

Design guidelines encapsulate good practice in secure systems design. Design


guidelines serve two purposes: they raise awareness of security issues in a
software engineering team, and they can be used as the basis of a review
checklist that is applied during the system validation process. Design guidelines
here are applicable during software specification and design.

o Base decisions on an explicit security policy - Define a security policy


for the organization that sets out the fundamental security requirements
that should apply to all organizational systems.
o Avoid a single point of failure - Ensure that a security failure can only
result when there is more than one failure in security procedures. For
example, have password and question-based authentication.
o Fail securely - When systems fail, for whatever reason, ensure that
sensitive information cannot be accessed by unauthorized users even
although normal security procedures are unavailable.
o Balance security and usability - Try to avoid security procedures that
make the system difficult to use. Sometimes you have to accept weaker
security to make the system more usable.
o Log user actions - Maintain a log of user actions that can be analyzed to
discover who did what. If users know about such a log, they are less likely
to behave in an irresponsible way.
o Use redundancy and diversity to reduce risk - Keep multiple copies of
data and use diverse infrastructure so that an infrastructure vulnerability
cannot be the single point of failure.
o Specify the format of all system inputs - If input formats are known
then you can check that all inputs are within range so that unexpected
inputs don't cause problems.
o Compartmentalize your assets - Organize the system so that assets are
in separate areas and users only have access to the information that they
need rather than all system information.
o Design for deployment - Design the system to avoid deployment
problems
o Design for recoverability - Design the system to simplify recoverability
after a successful attack.
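The "specify the format of all system inputs" guideline can be sketched in Python. This is a minimal illustration, not a prescribed implementation; the input names and format patterns are hypothetical examples.

```python
import re

# Hypothetical input-format declarations: each named system input is paired
# with a regular expression that describes its only acceptable format.
INPUT_FORMATS = {
    "account_id": re.compile(r"[A-Z]{2}\d{6}"),      # e.g. "AB123456"
    "amount": re.compile(r"\d{1,7}(\.\d{2})?"),      # e.g. "12.50"
}

def validate_input(name, value):
    """Return True only if the value matches its declared format exactly,
    so unexpected inputs are rejected before they can cause problems."""
    pattern = INPUT_FORMATS.get(name)
    if pattern is None:
        raise ValueError(f"no format declared for input '{name}'")
    return pattern.fullmatch(value) is not None
```

With these declarations, a well-formed account identifier passes, while an injection-style string for the same field is rejected outright.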

Security Testing and Assurance

What is Security Testing?

Security Testing is a type of Software Testing that uncovers vulnerabilities,
threats, and risks in a software application and prevents malicious attacks from
intruders. The purpose of Security Tests is to identify all possible loopholes and
weaknesses of the software system which might result in a loss of information,
revenue, repute at the hands of the employees or outsiders of the Organization.

Why Security Testing is Important?

The main goal of Security Testing is to identify the threats in the system and
measure its potential vulnerabilities, so that threats can be countered and the
system does not stop functioning or get exploited. It also helps in detecting
all possible security risks in the system and helps developers fix these problems
through coding.

Types of Security Testing in Software Testing

There are seven main types of security testing, as per the Open Source Security
Testing Methodology Manual (OSSTMM). They are explained as follows:
• Vulnerability Scanning: This is done through automated software to scan
a system against known vulnerability signatures.
• Security Scanning: This involves identifying network and system
weaknesses and later provides solutions for reducing these risks. Security
scanning can be performed both manually and with automated tools.
• Penetration testing: This kind of testing simulates an attack from a
malicious hacker. This testing involves analysis of a particular system to
check for potential vulnerabilities to an external hacking attempt.
• Risk Assessment: This testing involves analysis of security risks observed
in the organization. Risks are classified as Low, Medium and High. This
testing recommends controls and measures to reduce the risk.
• Security Auditing: This is an internal inspection of Applications and
Operating systems for security flaws. An audit can also be done via line-
by-line inspection of code
• Ethical hacking: This involves hacking an organization's own software
systems. Unlike malicious hackers, who break in for their own gain, the
intent is to expose security flaws in the system.
• Posture Assessment: This combines Security scanning, Ethical Hacking
and Risk Assessments to show an overall security posture of an
organization.
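Vulnerability scanning, the first type above, can be sketched as a comparison of installed component versions against known-vulnerability signatures. This is a toy illustration only; the component names and vulnerable versions below are hypothetical stand-ins for a real signature database.

```python
# Hypothetical signature database: component -> versions with known
# vulnerabilities (a real scanner would consult a maintained feed).
KNOWN_VULNERABLE = {
    "openssl": {"1.0.1", "1.0.1f"},
    "log4j": {"2.14.1", "2.15.0"},
}

def scan(installed):
    """Return a finding for every installed component whose version
    matches a known vulnerability signature."""
    findings = []
    for component, version in installed.items():
        if version in KNOWN_VULNERABLE.get(component, set()):
            findings.append((component, version))
    return findings
```

Running the scanner over an inventory flags only the components whose versions appear in the signature set.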

How to do Security Testing

It is widely agreed that the cost of fixing problems is higher if security testing
is postponed until after the software implementation phase or after deployment.
So it is necessary to involve security testing in the earlier phases of the SDLC.

Let’s look into the corresponding Security processes to be adopted for every
phase in SDLC
SDLC Phases and the corresponding Security Processes:

• Requirements – Security analysis of requirements; check abuse/misuse cases
• Design – Security risk analysis of the design; development of a test plan
including security tests
• Coding and Unit Testing – Static and dynamic testing; security white box
testing
• Integration Testing – Black box testing
• System Testing – Black box testing and vulnerability scanning
• Implementation – Penetration testing, vulnerability scanning
• Support – Impact analysis of patches

The test plan should include

• Security-related test cases or scenarios
• Test Data related to security testing
• Test Tools required for security testing
• Analysis of various tests outputs from different security tools

Example Test Scenarios for Security Testing

Sample Test scenarios to give you a glimpse of security test cases –

• A password should be stored in encrypted format
• Application or System should not allow invalid users
• Check cookies and session time for application
• For financial sites, the Browser back button should not work.
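The first scenario above ("a password should be stored in encrypted format") can be expressed as an executable check. A minimal sketch using only the Python standard library follows; the salted-PBKDF2 storage scheme here is one common approach, not the only acceptable one.

```python
import hashlib
import secrets

def store_password(plaintext):
    """Store only a salted PBKDF2 hash of the password, never the plaintext."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", plaintext.encode(), salt, 100_000)
    return salt + digest            # 16-byte salt + 32-byte digest

def password_is_not_stored_in_clear(plaintext):
    """Security test scenario: the stored record must not contain the
    plaintext password anywhere."""
    record = store_password(plaintext)
    return plaintext.encode() not in record
```

A security test suite would run this kind of check against the real credential store rather than a local function.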

Methodologies/ Approach / Techniques for Security Testing

In security testing, different methodologies are followed, and they are as follows:

• Tiger Box: This hacking is usually done on a laptop that has a collection
of OSs and hacking tools. This setup helps penetration testers and security
testers to conduct vulnerability assessments and attacks.
• Black Box: The tester is given no prior knowledge of the network topology
or the technology, and tests the system as an external attacker would.
• Grey Box: Partial information is given to the tester about the system, and
it is a hybrid of white and black box models.

Security Testing Roles

• Hackers – Access a computer system or network without authorization
• Crackers – Break into systems to steal or destroy data
• Ethical Hacker – Performs most of the breaking activities but with
permission from the owner
• Script Kiddies or packet monkeys – Inexperienced hackers who rely on
existing scripts and tools, with little programming skill

What is Software Security Assurance?

Software security assurance (SSA) is an approach to designing, building,
and implementing software that addresses security needs from the
ground up. Transparency is critical with SSA because it provides a high level of
trust that an application performs as intended without any unexpected functions
that could lead to security compromises.

The benefits of SSA extend from the companies that develop software to
the end users of that software. When procuring a third-party application, SSA
assures that you’re getting code built from the ground up with security in mind.

Today’s digitally-powered businesses often depend on integrating
multiple software components. Poor security in any of these components
could either bring the store offline or put customer data at risk
(see SolarWinds). Consider how an eCommerce company depends on a
website, an online store, analytics, CRM software, inventory management
software, and more.

For software-led businesses that sell software to other companies or users, SSA
increases trust in your code. And, when coding custom web applications in-
house for your own company’s use, SSA can significantly reduce the likelihood of
breaches or compromises from basic security mistakes.

It’s important not to confuse the concept of SSA with the popular idea of shifting
security to the left. Shifting left mainly focuses on moving security checks and
tests to earlier phases of the development cycle.

SSA, however, is an entire secure-by-design ethos that evaluates
security concerns based on the software’s tasks, the data it will handle,
and the vulnerabilities that could be present.

Software security assurance also differs from quality assurance in that the latter
is about ensuring software engineering processes meet defined policies and
standards, usually through testing. Security assurance, on the other hand, is
all about ensuring that software conforms to its security requirements
and doesn’t include any functionality that could compromise security.

Resilience Engineering
The resilience of a system is a judgment of how well that system can maintain
the continuity of its critical services in the presence of disruptive events, such
as equipment failure and cyberattacks. This view encompasses these three ideas:

• Some of the services offered by a system are critical services whose
failure could have serious human, social or economic effects.
• Some events are disruptive and can affect the ability of a system to
deliver its critical services.
• Resilience is a judgment - there are no resilience metrics and resilience
cannot be measured. The resilience of a system can only be assessed by
experts, who can examine the system and its operational processes.

Resilience engineering places more emphasis on limiting the number of
system failures that arise from external events such as operator errors or
cyberattacks. It rests on two assumptions:

• It is impossible to avoid system failures, so resilience engineering is
concerned with limiting the costs of these failures and recovering from them.
• Good reliability engineering practices have been used to minimize the
number of technical faults in a system.

Four related resilience activities are involved in the detection and recovery
from system problems:

• The system or its operators should recognize early indications of system
failure.
• If the symptoms of a problem or cyberattack are detected early,
then resistance strategies may be used to reduce the probability that the
system will fail.
• If a failure occurs, the recovery activity ensures that critical system
services are restored quickly so that system users are not badly affected
by failure.
• In this final activity, all of the system services are restored and normal
system operation can continue.

Cybersecurity

Cybercrime is the illegal use of networked systems and is one of the most
serious problems facing our society. Cybersecurity is a broader topic than system
security engineering. Cybersecurity is a socio-technical issue covering all
aspects of ensuring the protection of citizens, businesses, and critical
infrastructures from threats that arise from their use of computers and the
Internet. Cybersecurity is concerned with all of an organization's IT assets from
networks through to application systems.

Factors contributing to cybersecurity failure:

• Organizational ignorance of the seriousness of the problem,
• Poor design and lax application of security procedures,
• Human carelessness,
• Inappropriate trade-offs between usability and security.

Cybersecurity threats:

• Threats to the confidentiality of assets: data is not damaged but it is
made available to people who should not have access to it.
• Threats to the integrity of assets: systems or data are damaged in some
way by a cyberattack.
• Threats to the availability of assets: aim to deny the use of assets by
authorized users.

Examples of controls to protect the assets:

• Authentication, where users of a system have to show that they are
authorized to access the system.
• Encryption, where data is algorithmically scrambled so that an
unauthorized reader cannot access the information.
• Firewalls, where incoming network packets are examined then accepted
or rejected according to a set of organizational rules.
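The firewall control above can be sketched as a simple first-match rule list. This is an illustrative toy, not a real packet filter; the rule set is a hypothetical example of "organizational rules".

```python
from ipaddress import ip_address, ip_network

# Hypothetical organizational rule set; the first matching rule wins and
# anything unmatched is rejected (default deny).
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),   # internal HTTPS
    ("deny", ip_network("0.0.0.0/0"), 23),      # block telnet from anywhere
    ("allow", ip_network("0.0.0.0/0"), 80),     # public HTTP
]

def accept_packet(src_ip, dst_port):
    """Examine an incoming packet and accept or reject it per the rules."""
    for action, network, port in RULES:
        if ip_address(src_ip) in network and dst_port == port:
            return action == "allow"
    return False   # default deny
```

Note the design choice of default deny: a packet that matches no rule is rejected, which fails securely in the sense of the earlier design guidelines.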

Redundancy and diversity are valuable for cybersecurity resilience:

• Copies of data and software should be maintained on separate computer
systems (supports recovery and reinstatement).
• Multi-stage diverse authentication can protect against password attacks
(supports resistance).
• Critical servers may be over-provisioned i.e. they may be more powerful
than is required to handle their expected load (supports resistance).

Cyber resilience planning:

• Asset classification: the organization's hardware, software and human
assets are examined and classified depending on how essential they are to
normal operations.
• Threat identification: for each of the assets (or, at least the critical and
important assets), you should identify and classify threats to that asset.
• Threat recognition: for each threat or, sometimes asset/threat pair, you
should identify how an attack based on that threat might be recognized.
• Threat resistance: for each threat or asset/threat pair, you should
identify possible resistance strategies. These may be either embedded in
the system (technical strategies) or may rely on operational procedures.
• Asset recovery: for each critical asset or asset/threat pair, you should
work out how that asset could be recovered in the event of a successful
cyberattack.
• Asset reinstatement: this is a more general process of asset recovery
where you define procedures to bring the system back into normal
operation.

Socio-technical Resilience

Resilience engineering is concerned with adverse external events that can lead to
system failure. To design a resilient system, you have to think about socio-
technical systems design and not exclusively focus on software. Dealing with
these events is often easier and more effective in the broader socio-technical
system.

Four characteristics that reflect the resilience of an organization:


The ability to respond

Organizations have to be able to adapt their processes and procedures in
response to risks. These risks may be anticipated risks or may be detected threats
to the organization and its systems.

The ability to monitor

Organizations should monitor both their internal operations and their external
environment for threats before they arise.

The ability to anticipate

A resilient organization should not simply focus on its current operations but
should anticipate possible future events and changes that may affect its
operations and resilience.

The ability to learn

Organizational resilience can be improved by learning from experience. It is
particularly important to learn from successful responses to adverse events, such
as the effective resistance of a cyberattack. Learning from success allows an
organization to refine and strengthen its response and resistance strategies.

People inevitably make mistakes (human errors) that sometimes lead to serious
system failures. There are two ways to consider human error:

• The person approach. Errors are considered to be the responsibility of
the individual and 'unsafe acts' (such as an operator failing to engage a
safety barrier) are a consequence of individual carelessness or reckless
behavior.
• The systems approach. The basic assumption is that people are fallible
and will make mistakes. People make mistakes because they are under
pressure from high workloads, poor training or because of inappropriate
system design.

Systems engineers should assume that human errors will occur during system
operation. To improve the resilience of a system, designers have to think about
the defenses and barriers to human error that could be part of a system. Where
possible, these barriers should be built into the technical components of the
system (technical barriers). If not, they could be part of the processes, procedures
and guidelines for using the system (socio-technical barriers).
Defensive layers have vulnerabilities: they are like slices of Swiss cheese
with holes in the layer corresponding to these vulnerabilities. Vulnerabilities are
dynamic: the 'holes' are not always in the same place and the size of the holes
may vary depending on the operating conditions. System failures occur when the
holes line up and all of the defenses fail.

Strategies to increase system resilience:

• Reduce the probability of the occurrence of an external event that
might trigger system failures.
• Increase the number of defensive layers. The more layers that you
have in a system, the less likely it is that the holes will line up and a system
failure occur.
• Design a system so that diverse types of barriers are included. The
'holes' will probably be in different places and so there is less chance of the
holes lining up and failing to trap an error.
• Minimize the number of latent conditions in a system. This means
reducing the number and size of system 'holes'.
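The Swiss cheese intuition, that failures need every layer's holes to line up, can be illustrated with a small Monte Carlo sketch. This is only an illustrative model under a strong simplifying assumption (layers fail independently, each stopping an error with a fixed probability).

```python
import random

def failure_probability(stop_probs, trials=100_000, seed=1):
    """Estimate how often an error slips through every defensive layer
    ('the holes line up'). Each layer independently stops the error with
    its own probability from stop_probs."""
    rng = random.Random(seed)
    failures = sum(
        all(rng.random() > p for p in stop_probs)   # error evades every layer
        for _ in range(trials)
    )
    return failures / trials
```

With layers that each stop 90% of errors, one layer lets through roughly 10%, two layers roughly 1%, and three roughly 0.1%, which is why adding and diversifying layers increases resilience.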

Resilient systems design

Designing systems for resilience involves two streams of work:

• Identifying critical services and assets that allow a system to fulfill its
primary purpose.
• Designing system components that support problem recognition,
resistance, recovery and reinstatement.

Survivable systems analysis


• System understanding: for an existing or proposed system, review the
goals of the system (sometimes called the mission objectives), the system
requirements and the system architecture.
• Critical service identification: the services that must always be
maintained and the components that are required to maintain these
services are identified.
• Attack simulation: scenarios or use cases for possible attacks are
identified along with the system components that would be affected by
these attacks.
• Survivability analysis: components that are both essential and
compromisable by an attack are identified, and survivability strategies based
on resistance, recognition and recovery are proposed.
UNIT IV SERVICE-ORIENTED SOFTWARE ENGINEERING, SYSTEMS
ENGINEERING AND REAL-TIME SOFTWARE ENGINEERING

Service-oriented Architecture

Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and/or integration. It defines a way to make software components
reusable using interfaces.

Formally, SOA is an architectural approach in which applications make use of
services available in the network. In this architecture, services are provided to
form applications, through a network call over the internet. It uses common
communication standards to speed up and streamline the service integrations in
applications. Each service in SOA is a complete business function in itself. The
services are published in such a way that it makes it easy for the developers to
assemble their apps using those services. Note that SOA is different from
microservice architecture.

• SOA allows users to combine a large number of facilities from existing
services to form applications.
• SOA encompasses a set of design principles that structure system
development and provide means for integrating components into a
coherent and decentralized system.
• SOA-based computing packages functionalities into a set of interoperable
services, which can be integrated into different software systems belonging
to separate business domains.

The different characteristics of SOA are as follows:


o Provides interoperability between the services.
o Provides methods for service encapsulation, service discovery, service
composition, service reusability and service integration.
o Facilitates QoS (Quality of Service) through service contracts based on
Service Level Agreements (SLAs).
o Provides loosely coupled services.
o Provides location transparency with better scalability and availability.
o Ease of maintenance with reduced cost of application development and
deployment.
There are two major roles within Service-oriented Architecture:

1. Service provider: The service provider is the maintainer of the service
and the organization that makes available one or more services for others
to use. To advertise services, the provider can publish them in a registry,
together with a service contract that specifies the nature of the service,
how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata
in the registry and develop the required client components to bind and use
the service.
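The provider/consumer interaction can be sketched with a toy in-memory registry. This is only an illustration of the two roles; the service name, endpoint, and contract below are hypothetical, and a real registry would be a networked directory service.

```python
class ServiceRegistry:
    """Toy registry: providers publish services with their contracts,
    consumers look up the metadata needed to bind to a service."""

    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint, contract):
        """Provider role: advertise a service together with its contract."""
        self._services[name] = {"endpoint": endpoint, "contract": contract}

    def lookup(self, name):
        """Consumer role: locate service metadata, or None if unknown."""
        return self._services.get(name)


registry = ServiceRegistry()
registry.publish(
    "currency-conversion",                      # hypothetical service name
    "https://services.example.com/fx",          # hypothetical endpoint
    "converts an amount between two currencies; fee charged per call",
)
```

A consumer would call `registry.lookup("currency-conversion")`, read the contract, and then build client components against the advertised endpoint.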

Services might aggregate information and data retrieved from other services or
create workflows of services to satisfy the request of a given service consumer.
This practice is known as service orchestration. Another important interaction
pattern is service choreography, which is the coordinated interaction of services
without a single point of control.

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service
description documents.
2. Loose coupling: Services are designed as self-contained components,
maintain relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and
description documents. They hide their logic, which is encapsulated within
their implementation.
4. Reusability: Designed as components, services can be reused more
effectively, thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and,
from a service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that
constitute supplemental metadata through which they can be effectively
discovered. Service discovery provides an effective means for utilizing
third-party resources.
7. Composability: Using services as building blocks, sophisticated and
complex operations can be implemented. Service orchestration and
choreography provide a solid support for composing services and achieving
business goals.

Advantages of SOA:

• Service reusability: In SOA, applications are made from existing services.
Thus, services can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can
be updated and modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by
combining services picked from different sources, independent of the
platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug
small services than a huge code base.
• Scalability: Services can run on different servers within an environment;
this increases scalability.

Disadvantages of SOA:

• High overhead: Validation of input parameters is performed whenever
services interact; this decreases performance because it increases load and
response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange
messages to complete tasks. The number of messages may run into millions,
and handling such a large number of messages becomes a cumbersome task.
Practical applications of SOA: SOA is used in many ways around us whether
it is mentioned or not.

1. SOA infrastructure is used by many armies and air forces to deploy
situational awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For
example, an app might need GPS so it uses the inbuilt GPS functions of the
device. This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their
information and content.

RESTful Services

REST or Representational State Transfer is an architectural style that can be
applied to web services to create and enhance properties like performance,
scalability, and modifiability. RESTful web services are generally highly scalable,
light, and maintainable and are used to create APIs for web-based applications.
It exposes API from an application in a secure and stateless manner to the client.
The protocol for REST is HTTP. In this architecture style, clients and servers use
a standardized interface and protocol to exchange representation of resources.

REST emerged as the predominant Web service design model just a couple of
years after its launch, measured by the number of Web services that use it. Owing
to its more straightforward style, it has mostly displaced SOAP and WSDL-based
interface design.

REST became popular due to the following reasons:

1. It allows web applications built using different programming languages to
communicate with each other. Also, web applications may reside in
different environments, like on Windows, or for example, Linux.
2. Mobile devices have become more popular than desktops. Using REST, you
don’t need to worry about the underlying layer for the device. Therefore, it
saves the amount of effort it would take to code applications on mobiles to
talk with normal web applications.
3. Modern applications have to be made compatible with the Cloud. As Cloud-
based architectures work using the REST principle, it makes sense for web
services to be programmed using the REST service-based architecture.

RESTful Architecture:

1. Division of State and Functionality: State and functionality are divided
into distributed resources. This is because every resource has to be
accessible via normal HTTP commands. That means a user should be able
to issue the GET request to get a file, issue the POST or PUT request to put
a file on the server, or issue the DELETE request to delete a file from the
server.
2. Stateless, Layered, Caching-Support, Client/Server Architecture: A
type of architecture where the web browser acts as the client, and the web
server acts as the server hosting the application, is called a client/server
architecture. The state of the application should not be maintained by
REST. The architecture should also be layered, meaning that there can be
intermediate servers between the client and the end server. It should also
be able to implement a well-managed caching mechanism.

Principles of RESTful applications:

1. URI Resource Identification: A RESTful web service should have a set
of resources that can be used to select targets of interactions with clients.
These resources can be identified by URI (Uniform Resource Identifiers).
The URIs provide a global addressing space and help with service discovery.
2. Uniform Interface: Resources should have a uniform or fixed set of
operations, such as PUT, GET, POST, and DELETE operations. This is a key
principle that differentiates between a REST web service and a non-REST
web service.
3. Self-Descriptive Messages: As resources are decoupled from their
representation, content can be accessed through a large number of formats
like HTML, PDF, JPEG, XML, plain text, JSON, etc. The metadata of the
resource can be used for various purposes like control caching, detecting
transmission errors, finding the appropriate representation format, and
performing authentication or access control.
4. Use of Hyperlinks for State Interactions: In REST, interactions with a
resource are stateless, that is, request messages are self-contained. So an
explicit state-transfer mechanism is used to provide stateful interactions. URI
rewriting, cookies, and form fields can be used to implement the exchange
of state. A state can also be embedded in response messages and can be
used to point to valid future states of interaction.
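The uniform-interface principle can be sketched with a toy in-memory resource store manipulated only through GET/PUT/POST/DELETE operations. This is a local illustration of the interface semantics, not an HTTP server; the URIs and payloads are hypothetical.

```python
class ResourceStore:
    """Toy REST-style store: resources are addressed by URI and manipulated
    only through the four uniform operations."""

    def __init__(self):
        self._resources = {}
        self._next_id = 1

    def get(self, uri):
        """Read a representation of the resource (None if absent)."""
        return self._resources.get(uri)

    def put(self, uri, representation):
        """Create or replace the resource at a client-known URI."""
        self._resources[uri] = representation
        return uri

    def post(self, collection, representation):
        """Create a resource in a collection; the server assigns the URI."""
        uri = f"{collection}/{self._next_id}"
        self._next_id += 1
        self._resources[uri] = representation
        return uri

    def delete(self, uri):
        """Remove the resource, returning its last representation if any."""
        return self._resources.pop(uri, None)
```

Note the conventional PUT/POST split sketched here: PUT targets a URI the client already knows, while POST lets the server mint the URI, which the client then uses in later stateless requests.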

Advantages of RESTful web services:

1. Speed: As there is no strict specification, RESTful web services are faster
as compared to SOAP. They also consume fewer resources and bandwidth.
2. Compatible with SOAP: RESTful web services can use SOAP web
services as their underlying implementation.
3. Language and Platform Independency: RESTful web services can be
written in any programming language and can be used on any platform.
4. Supports Various Data Formats: It permits the use of several data
formats like HTML, XML, Plain Text, JSON, etc.

Service Engineering
Service engineering is the process of developing services for reuse in service-
oriented applications. The service has to be designed as a reusable abstraction
that can be used in different systems. Generally useful functionality associated
with that abstraction must be designed and the service must be robust and
reliable. The service must be documented so that it can be discovered and
understood by potential users.

Stages of service engineering include:

• Service candidate identification, where you identify possible services
that might be implemented and define the service requirements. It involves
understanding an organization's business processes to decide which
reusable services could support these processes. Three fundamental
types of service:
o Utility services that implement general functionality used by
different business processes.
o Business services that are associated with a specific business
function e.g., in a university, student registration.
o Coordination services that support composite processes such as
ordering.
• Service design, where you design the logical service interface and its
implementation interfaces (SOAP and/or RESTful). Involves thinking about
the operations associated with the service and the messages exchanged.
The number of messages exchanged to complete a service request should
normally be minimized. Service state information may have to be included
in messages. Interface design stages:
o Logical interface design. Starts with the service requirements and
defines the operation names and parameters associated with the
service. Exceptions should also be defined.
o Message design (SOAP). For SOAP-based services, design the
structure and organization of the input and output messages.
Notations such as the UML are a more abstract representation than
XML. The logical specification is converted to a WSDL description.
o Interface design (REST). Design how the required operations map
onto REST operations and what resources are required.
• Service implementation and deployment, where you implement and
test the service and make it available for use. Programming services using
a standard programming language or a workflow language. Services then
have to be tested by creating input messages and checking that the output
messages produced are as expected. Deployment involves publicizing the
service and installing it on a web server. Current servers provide support
for service installation.
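The logical interface design stage above, defining operation names, parameters, and exceptions before choosing SOAP or REST, can be sketched in Python. The student-registration business service follows the university example in the text, but the operation, parameters, capacity rule, and exception are all hypothetical illustrations.

```python
class RegistrationError(Exception):
    """Declared service exception: raised when registration cannot proceed."""


class StudentRegistrationService:
    """Logical interface sketch for a hypothetical business service; the
    operations defined here would later map to SOAP messages or REST calls."""

    def __init__(self):
        self._enrolled = {}                 # course -> set of student ids
        self._capacity = {"SE4151": 2}      # illustrative capacity rule

    def register(self, student_id, course):
        """Operation: enroll a student on a course.
        Raises the declared RegistrationError when the course is full."""
        enrolled = self._enrolled.setdefault(course, set())
        if len(enrolled) >= self._capacity.get(course, 0):
            raise RegistrationError(f"{course} is full")
        enrolled.add(student_id)
        return {"student": student_id, "course": course, "status": "registered"}
```

Declaring the exception alongside the operation is the point of the exercise: the WSDL description or REST error responses are then derived from this logical specification rather than invented ad hoc.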

Service Composition
Existing services are composed and configured to create new composite services
and applications. The basis for service composition is often a workflow. Workflows
are logical sequences of activities that, together, model a coherent business
process. For example, provide a travel reservation services which allows flights,
car hire and hotel bookings to be coordinated.

Service construction by composition:


Formulate outline workflow
In this initial stage of service design, you use the requirements for the
composite service as a basis for creating an 'ideal' service design.
Discover services
During this stage of the process, you search service registries or catalogs
to discover what services exist, who provides these services and the details
of the service provision.
Select possible services
Your selection criteria will obviously include the functionality of the services
offered. They may also include the cost of the services and the quality of
service (responsiveness, availability, etc.) offered.
Refine workflow
This involves adding detail to the abstract description and perhaps adding
or removing workflow activities.
Create workflow program
During this stage, the abstract workflow design is transformed to an
executable program and the service interface is defined. You can use a
conventional programming language, such as Java or a workflow language,
such as WS-BPEL.
Test completed service or application
The process of testing the completed, composite service is more complex
than component testing in situations where external services are used.
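The travel-reservation example can be sketched as a composite service built from a workflow over existing services. The three services below are local stubs standing in for real discovered services; their names and return values are hypothetical, and a real workflow would be expressed in a language such as WS-BPEL or invoke remote endpoints.

```python
# Stubs for three existing services discovered and selected for composition.
def book_flight(destination):
    return {"flight": f"FL-{destination}"}

def book_hotel(destination):
    return {"hotel": f"HT-{destination}"}

def hire_car(destination):
    return {"car": f"CAR-{destination}"}

def travel_reservation(destination):
    """Composite service: a workflow that coordinates the three services
    in sequence and merges their results into one itinerary."""
    itinerary = {}
    for service in (book_flight, book_hotel, hire_car):
        itinerary.update(service(destination))
    return itinerary
```

Testing this composite against stubs is straightforward; as the text notes, testing becomes harder once the stubs are replaced by external services whose behavior and availability the composer does not control.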

Systems Engineering
Systems Engineering is an engineering field that takes an interdisciplinary
approach to product development. Systems engineers analyze the collection of
pieces to make sure that, when working together, they achieve the intended objectives
or purpose of the product. For example, in automotive development, a propulsion
system or braking system will involve mechanical engineers, electrical engineers,
and a host of other specialized engineering disciplines. A systems engineer will
focus on making each of the individual systems work together into an integrated
whole that performs as expected across the lifecycle of the product.

What are the fundamentals of systems engineering?

In product development, systems engineering is the interdisciplinary field that
focuses on designing, integrating, and managing the systems that work together
to form a more complex system. Systems engineering is based around systems
thinking principles, and the goal of a systems engineer is to help a product team
produce an engineered system that performs a useful function as defined by the
requirements written at the beginning of the project. The final product should be
one where the individual systems work together as a cohesive whole that meets
the requirements of the product.
What is the role of a systems engineer?

A systems engineer is tasked with looking at the entire integrated system and
evaluating it against its desired outcomes. In that role, the systems engineer
must know a little bit about everything and have an ability to see the “big picture.”
While specialists can focus on their specific disciplines, the systems engineer must
evaluate the complex system as a whole against the initial requirements and
desired outcomes.

Systems engineers have multi-faceted roles to play but primarily assist with:

• Design compatibility
• Definition of requirements
• Management of projects
• Cost analysis
• Scheduling
• Possible maintenance needs
• Ease of operations
• Future systems upgrades
• Communication among engineers, managers, suppliers, and customers in
regards to the system’s operations

What is the Systems Engineering Process?

The systems engineering process can take a top-down, bottom-up, or middle-out
approach, depending on the system being developed. The process encompasses
all creative, manual, and technical activities necessary to define the ultimate
outcomes and see that the development process results in a product that meets
objectives.

The process typically has four basic steps:

• Task definition/analysis/conceptual: In this step, the systems
engineer works with stakeholders to understand their needs and
constraints. This stage could be considered a creative or idea stage where
brainstorming takes place and market analysis and end user desires are
included.
• Design/requirements: In this phase, individual engineers and team
members analyze the needs in step 1 and translate them into requirements
that describe how the system needs to work. The systems engineer
evaluates the systems as a whole and offers feedback to improve
integration and overall design.
• Create traceability: Although we’re listing traceability here as the third
step, traceability is actually created throughout the lifecycle of development
and is not a discrete activity taking place during one phase. Throughout the
lifecycle of development, the team works together to design individual
systems that will integrate into one cohesive whole. The systems engineer
helps manage traceability and integration of the individual systems.
• Implementation/market launch: When everyone has executed their
roles properly, the final product is manufactured or launched with the
assurance that it will operate as expected in a complex system throughout
its anticipated life cycle.

Socio-technical systems

Socio-technical systems are large-scale systems that do not just include software
and hardware but also people, processes and organizational policies. Socio-
technical systems are often 'systems of systems', i.e., they are made up of a
number of independent systems. The boundaries of a socio-technical system are
subjective rather than objective: different people see the system in different ways.

Socio-technical systems are used within organizations and are therefore
profoundly affected by the organizational environment in which they are used.
Failure to take this environment into account when designing the system is likely
to lead to user dissatisfaction and system rejection.

There are a number of key elements in an organization that may affect the
requirements, design, and operation of a socio-technical system. A new system
may lead to changes in some or all of these elements:

• Process changes: Systems may require changes to business processes so
training may be required. Significant changes may be resisted by users.
• Job changes: Systems may de-skill users or cause changes to the way
they work. The status of individuals may be affected by a new system.
• Organizational policies: The proposed system may not be consistent
with current organizational policies.
• Organizational politics: Systems may change the political power
structure in an organization. Those that control the system have more
power.

A complex system may include software, mechanical, electrical and electronic
hardware and be operated by people. System components are dependent on other
system components. The properties and behavior of system components are
inextricably inter-mingled. This leads to complexity. Complexity is the reason why
socio-technical systems have emergent properties, are non-deterministic and
have subjective success criteria:

• Emergent properties: Properties of the system as a whole that depend
on the system components and their relationships.
• Non-deterministic: They do not always produce the same output when
presented with the same input because the system’s behavior is partially
dependent on human operators.
• Complex relationships with organizational objectives: The extent to
which the system supports organizational objectives does not just depend
on the system itself.

Emergent properties are properties of the system as a whole rather than
properties that can be derived from the properties of components of a system.
Emergent properties are a consequence of the relationships between system
components. They can therefore only be assessed and measured once the
components have been integrated into a system.

Two types of emergent properties:

• Functional properties: These appear when all the parts of a system work
together to achieve some objective. For example, a bicycle has the
functional property of being a transportation device once it has been
assembled from its components.
• Non-functional emergent properties: Examples are reliability,
performance, safety, and security. These relate to the behavior of the
system in its operational environment. They are often critical for computer-
based systems as failure to achieve some minimal defined level in these
properties may make the system unusable.

System reliability is a good example of an emergent property. Because of
component inter-dependencies, faults can be propagated through the system.
System failures often occur because of unforeseen inter-relationships between
components. It is practically impossible to anticipate all possible component
relationships. Software reliability measures may give a false picture of the
overall system reliability.

System reliability is influenced by:

• Hardware reliability: What is the probability of a hardware component
failing and how long does it take to repair that component?
• Software reliability: How likely is it that a software component will
produce an incorrect output? Software failure is usually distinct from
hardware failure in that software does not wear out.
• Operator reliability: How likely is it that the operator of a system will
make an error?
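As a rough illustration of how these factors combine: if hardware, software, and operator failures were independent (in practice they are not), overall reliability would be the product of the individual reliabilities. The numbers below are illustrative only.

```python
# Simplified sketch with illustrative numbers: treat hardware, software,
# and operator reliabilities as independent probabilities and combine
# them. Real failures are not independent, so this is an idealization.

def system_reliability(r_hardware, r_software, r_operator):
    """Probability that the whole system works, assuming independence."""
    return r_hardware * r_software * r_operator

# Even with each part at 90-99%, the system as a whole is notably weaker:
print(system_reliability(0.99, 0.95, 0.90))  # about 0.846
```

This is one way to see why a software reliability measure alone gives a false picture: the system figure is lower than any single component's figure.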

Failures are not independent and they propagate from one level to another.

System reliability depends on the context where the system is used. A system
that is reliable in one environment may be less reliable in a different environment
because the physical conditions (e.g., the temperature) and the mode of
operation are different.

A deterministic system is one where a given sequence of inputs will always
produce the same sequence of outputs. Software systems are deterministic;
systems that include humans are non-deterministic. A socio-technical
system will not always produce the same sequence of outputs from the same
input sequence:

• Human elements: People do not always behave in the same way.
• System changes: System behavior is unpredictable because of frequent
changes to hardware, software and data.
Complex systems are developed to address 'wicked problems' - problems
where there cannot be a complete specification. Different stakeholders see the
problem in different ways and each has a partial understanding of the issues
affecting the system. Consequently, different stakeholders have their own views
about whether or not a system is 'successful'. Success is a judgment and cannot
be objectively measured. Success is judged using the effectiveness of the system
when deployed rather than judged against the original reasons for procurement.

Conceptual design

Conceptual design investigates the feasibility of an idea and develops that idea
to create an overall vision of a system. Conceptual design precedes and overlaps
with requirements engineering. It may involve discussions with users and other
stakeholders and the identification of critical requirements. The aim of conceptual
design is to create a high-level system description that communicates the system
purpose to non-technical decision makers.

Conceptual design activities:

• Concept formulation: Refine an initial statement of needs and work out
what type of system is most likely to meet the needs of system
stakeholders.
• Problem understanding: Discuss with stakeholders how they do their
work, what is and isn't important to them, what they like and don't like
about existing systems.
• System proposal development: Set out ideas for possible systems
(maybe more than one).
• Feasibility study: Look at comparable systems that have been developed
elsewhere (if any) and assess whether or not the proposed system could
be implemented using current hardware and software technologies.
• System structure development: Develop an outline architecture for the
system, identifying (where appropriate) other systems that may be reused.
• System vision document: Document the results of the conceptual design
in a readable, non-technical way. Should include a short summary and
more detailed appendices.

System Procurement

System procurement is the process of acquiring a system (or systems) to meet
some identified organizational need. Before procurement, decisions are made
on: scope of the system, system budgets and timescales, high-level
system requirements. Based on this information, decisions are made on
whether to procure a system, the type of system and the potential system
suppliers. These decisions are driven by:

• The state of other organizational systems and whether or not they need to
be replaced
• The need to comply with external regulations
• External competition
• Business re-organization
• Available budget

It is usually necessary to develop a conceptual design document and high-level
requirements before procurement. You need a specification to let a contract for
system development. The specification may allow you to buy a commercial off-
the-shelf (COTS) system, which is almost always cheaper than developing a
system from scratch. Large complex systems usually consist of a mix of off-the-
shelf and specially designed components. The procurement processes for these
different types of components are usually different.

Three types of systems or system components may have to be procured:

• Off-the-shelf applications that may be used without change and which need
only minimal configuration for use.
• Configurable application or ERP systems that have to be modified or
adapted for use either by modifying the code or by using inbuilt
configuration features, such as process definitions and rules.
• Custom systems that have to be designed and implemented specially for
use.

Issues with system procurement:


• Organizations often have an approved and recommended set of application
software that has been checked by the IT department. It is usually possible
to buy or acquire open-source software from this set directly without the
need for detailed justification. There are no detailed requirements and the
users adapt to the features of the chosen application.
• Off-the-shelf components do not usually match requirements exactly.
Choosing a system means that you have to find the closest match between
the system requirements and the facilities offered by off-the-shelf systems.
• When a system is to be built specially, the specification of requirements is
part of the contract for the system being acquired. It is therefore a legal as
well as a technical document. The requirements document is critical and
procurement processes of this type usually take a considerable amount of
time.
• For public sector systems especially, there are detailed rules and
regulations that affect the procurement of systems. These force the
development of detailed requirements and make agile development
difficult.
• For application systems that require change or for custom systems there is
usually a contract negotiation period where the customer and supplier
negotiate the terms and conditions for the development of the system.
During this process, requirements changes may be agreed to reduce the
overall costs and avoid some development problems.

System Development

System development usually follows a plan-driven approach because of the need
for parallel development of different parts of the system. There is little scope
for iteration between phases because hardware changes are very expensive;
software may have to compensate for hardware problems. Development inevitably
involves engineers from different disciplines who must work together, and there
is much scope for misunderstanding here. Different disciplines use different
vocabularies, so much negotiation is required. Engineers may have personal
agendas to fulfil.

The system development process:

• Requirements engineering: The process of refining, analyzing and
documenting the high-level and business requirements identified in the
conceptual design.
• Architectural design: Establishing the overall architecture of the system,
identifying components and their relationships.
• Requirements partitioning: Deciding which subsystems (identified in the
system architecture) are responsible for implementing the system
requirements.
• Subsystem engineering: Developing the software components of the
system, configuring off-the-shelf hardware and software, defining the
operational processes for the system and re-designing business processes.
• System integration: Putting together system elements to create a new
system.
• System testing: The whole system is tested to discover problems.
• System deployment: the process of making the system available to its
users, transferring data from existing systems and establishing
communications with other systems in the environment.

Requirements engineering and system design are inextricably linked.
Constraints posed by the system's environment and other systems limit design
choices so the actual design to be used may be a requirement. Initial design may
be necessary to structure the requirements. As you do design, you learn more
about the requirements.

Subsystem engineering may involve some application systems procurement.
Typically, parallel projects develop the hardware, software and
communications. Lack of communication across implementation teams can cause
problems. There may be a bureaucratic and slow mechanism for
proposing system changes, which means that the development schedule may be
extended because of the need for rework.

System integration is the process of putting hardware, software and
people together to make a system. It should ideally be tackled incrementally so
that sub-systems are integrated one at a time. The system is tested as it is
integrated. Interface problems between sub-systems are usually found at this
stage. There may be problems with uncoordinated deliveries of system
components.

System delivery and deployment takes place after completion, when the
system has to be installed in the customer's environment. A number of issues can
occur:

• Environmental assumptions may be incorrect;
• May be human resistance to the introduction of a new system;
• System may have to coexist with alternative systems for some time;
• May be physical installation problems (e.g., cabling problems);
• Data cleanup may be required;
• Operator training has to be identified.

System Operation and Evolution

Operational processes are the processes involved in using the system for its
defined purpose. For new systems, these processes may have to be designed and
tested and operators trained in the use of the system. Operational processes
should be flexible to allow operators to cope with problems and periods of
fluctuating workload.
Problems with operation automation:

• It is likely to increase the technical complexity of the system because it has
to be designed to cope with all anticipated failure modes. This increases
the costs and time required to build the system.
• Automated systems are inflexible. People are adaptable and can cope with
problems and unexpected situations. This means that you do not have to
anticipate everything that could possibly go wrong when you are specifying
and designing the system.

Large systems have a long lifetime. They must evolve to meet changing
requirements. Existing systems which must be maintained are sometimes called
legacy systems. Evolution is inherently costly for a number of reasons:

• Changes must be analyzed from a technical and business perspective;
• Sub-systems interact so unanticipated problems can arise;
• There is rarely a rationale for original design decisions;
• System structure is corrupted as changes are made to it.

Factors that affect system lifetimes:

• Investment cost: The costs of a systems engineering project may be tens
or even hundreds of millions of dollars. These costs can only be justified if
the system can deliver value to an organization for many years.
• Loss of expertise: As businesses change and restructure to focus on their
core activities, they often lose engineering expertise. This may mean that
they lack the ability to specify the requirements for a new system.
• Replacement cost: The cost of replacing a large system is very high.
Replacing an existing system can only be justified if this leads to significant
cost savings over the existing system.
• Return on investment: If a fixed budget is available for systems
engineering, spending this on new systems in some other area of the
business may lead to a higher return on investment than replacing an
existing system.
• Risks of change: Systems are an inherent part of business operations and
the risks of replacing existing systems with new systems cannot be
justified. The danger with a new system is that things can go wrong in the
hardware, software and operational processes. The potential costs of these
problems for the business may be so high that they cannot take the risk of
system replacement.
• System dependencies: Other systems may depend on a system and
making changes to these other systems to accommodate a replacement
system may be impractical.

Proposed changes have to be analyzed very carefully from a business and a
technical perspective. Subsystems are never completely independent so changes
to a subsystem may have side-effects that adversely affect other subsystems.
Reasons for original design decisions are often unrecorded. Those responsible for
the system evolution have to work out why these decisions were made. As
systems age, their structure becomes corrupted by change, so the costs of making
further changes increase.

Real-time Software Engineering

Computers are used to control a wide range of systems from simple domestic
machines, through games controllers, to entire manufacturing plants. Their
software must react to events generated by the hardware and, often, issue control
signals in response to these events. The software in these systems is
embedded in system hardware, often in read-only memory, and usually
responds, in real time, to events from the system's environment.

Responsiveness in real time is the critical difference between embedded
systems and other software systems, such as information systems, web-based
systems or personal software systems. For non-real-time systems, correctness
can be defined by specifying how system inputs map to corresponding outputs
that should be produced by the system. In a real-time system, the correctness
depends both on the response to an input and the time taken to generate
that response. If the system takes too long to respond, then the required
response may be ineffective.

A real-time system is a software system where the correct functioning of the
system depends on the results produced by the system and the time at which
these results are produced. A soft real-time system is a system whose
operation is degraded if results are not produced according to the specified timing
requirements. A hard real-time system is a system whose operation is incorrect
if results are not produced according to the timing specification.

Characteristics of embedded systems:

• Embedded systems generally run continuously and do not terminate.
• Interactions with the system's environment are unpredictable.
• There may be physical limitations that affect the design of a system.
• Direct hardware interaction may be necessary.
• Issues of safety and reliability may dominate the system design.

Embedded System Design

The design process for embedded systems is a system engineering process that
has to consider, in detail, the design and performance of the system hardware.
Part of the design process may involve deciding which system capabilities are to
be implemented in software and which in hardware. Low-level decisions on
hardware, support software and system timing must be considered early in the
process. These may mean that additional software functionality, such as battery
and power management, has to be included in the system.

Real-time systems are often considered to be reactive systems. Given a
stimulus, the system must produce a reaction or response within a specified
time. Stimuli come from sensors in the system's environment; responses are
sent to actuators controlled by the system.

• Periodic stimuli occur at predictable time intervals. For example, the
system may examine a sensor every 50 milliseconds and take action
(respond) depending on that sensor value (the stimulus).
• Aperiodic stimuli occur irregularly and unpredictably and may be
signaled using the computer's interrupt mechanism. An example of such a
stimulus would be an interrupt indicating that an I/O transfer was complete
and that data was available in a buffer.
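A periodic stimulus handler can be sketched as a polling loop. This is a simplified, hypothetical example: `read_sensor` stands in for real hardware access, and the sleep until the next 50 ms period is omitted.

```python
# Hypothetical sketch of a periodic stimulus handler: poll a sensor once
# per period and respond when the value is exceptional. read_sensor is a
# stand-in for hardware access.

def poll(read_sensor, threshold, cycles):
    """Run `cycles` polling periods; return the (tick, value) alarms raised."""
    alarms = []
    for tick in range(cycles):
        value = read_sensor(tick)
        if value > threshold:          # exceptional condition detected
            alarms.append((tick, value))
    return alarms

readings = [20, 22, 75, 21, 90]        # simulated samples, one per period
print(poll(lambda t: readings[t], 50, len(readings)))  # [(2, 75), (4, 90)]
```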

Because of the need to respond to timing demands made by different
stimuli/responses, the system architecture must allow for fast switching
between stimulus handlers. Timing demands of different stimuli are different
so a simple sequential loop is not usually adequate. Real-time systems are
therefore usually designed as cooperating processes with a real-time executive
controlling these processes.
• Sensor control processes collect information from sensors. May buffer
information collected in response to a sensor stimulus.
• Data processor carries out processing of collected information and
computes the system response.
• Actuator control processes generate control signals for the actuators.

Processes in a real-time system have to be coordinated and share
information. Process coordination mechanisms ensure mutual exclusion to
shared resources. When one process is modifying a shared resource, other
processes should not be able to change that resource. When designing the
information exchange between processes, you have to take into account the fact
that these processes may be running at different speeds.

Producer processes collect data and add it to the buffer. Consumer processes
take data from the buffer, freeing buffer elements for reuse. Producer and
consumer processes must be mutually excluded from accessing the same element.
The buffer must stop producer processes adding information to a full buffer and
consumer processes trying to take information from an empty buffer.
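A minimal sketch of this producer/consumer arrangement, using Python's thread-safe bounded queue to provide the mutual exclusion and blocking behaviour described above:

```python
# Bounded producer/consumer buffer: queue.Queue is thread-safe, so
# producers block while the buffer is full and consumers block while it
# is empty, which is exactly the coordination described in the text.

import queue
import threading

buffer = queue.Queue(maxsize=4)        # bounded buffer with 4 elements
received = []

def producer(items):
    for item in items:
        buffer.put(item)               # blocks while the buffer is full

def consumer(count):
    for _ in range(count):
        received.append(buffer.get())  # blocks while the buffer is empty

data = list(range(10))
p = threading.Thread(target=producer, args=(data,))
c = threading.Thread(target=consumer, args=(len(data),))
p.start(); c.start()
p.join(); c.join()
print(received == data)  # True: all items transferred, in order
```

Internally, `queue.Queue` uses a lock and condition variables, which is the mutual-exclusion mechanism the text describes; the two threads can run at different speeds without losing or corrupting data.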

The effect of a stimulus in a real-time system may trigger a transition
from one state to another. State models are therefore often used to describe
embedded real-time systems. UML state diagrams may be used to show the
states and state transitions in a real-time system.
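A state model can be implemented directly as a transition table. The states and stimuli below are illustrative (a hypothetical heating controller), not from any specific system:

```python
# Illustrative state model: a transition table maps (state, stimulus)
# pairs to the next state, mirroring what a UML state diagram shows
# graphically. States and events are hypothetical.

TRANSITIONS = {
    ("idle", "start"): "heating",
    ("heating", "temp_reached"): "holding",
    ("holding", "timer_done"): "idle",
    ("heating", "fault"): "error",
    ("holding", "fault"): "error",
}

def run(stimuli, state="idle"):
    """Apply a stimulus sequence; unknown events leave the state unchanged."""
    for event in stimuli:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["start", "temp_reached", "timer_done"]))  # idle
print(run(["start", "fault"]))                       # error
```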

Programming languages for real-time systems development have to include
facilities to access system hardware, and it should be possible to predict the
timing of particular operations in these languages. Systems-level languages, such
as C, which allow efficient code to be generated are widely used in preference to
languages such as Java. There is a performance overhead in object-oriented
systems because extra code is required to mediate access to attributes and
handle calls to operations. The loss of performance may make it impossible to
meet real-time deadlines.

Architectural Patterns for Real-time Software


Characteristic system architectures for embedded systems:

• Observe and React pattern is used when a set of sensors are routinely
monitored and displayed.
• Environmental Control pattern is used when a system includes sensors,
which provide information about the environment and actuators that can
change the environment.
• Process Pipeline pattern is used when data has to be transformed from
one representation to another before it can be processed.

Observe and React pattern description

The input values of a set of sensors of the same type are collected and analyzed.
These values are displayed in some way. If the sensor values indicate that some
exceptional condition has arisen, then actions are initiated to draw the operator's
attention to that value and, in certain cases, to take actions in response to the
exceptional value.

Stimuli - Values from sensors attached to the system.

Responses - Outputs to display, alarm triggers, signals to reacting systems.

Processes - Observer, Analysis, Display, Alarm, Reactor.

Used in - Monitoring systems, alarm systems.
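A sketch of the Observe and React processes as plain functions. The sensors are simulated by callables; in a real system, Observer, Analysis, and Alarm/Reactor would be separate cooperating processes.

```python
# Sketch of the Observe and React pattern with process boundaries
# collapsed into functions. Sensor names and values are illustrative.

def observe(sensors):
    """Observer: take one reading from each sensor."""
    return {name: read() for name, read in sensors.items()}

def analyse(readings, limit):
    """Analysis: pick out the exceptional values."""
    return {name: v for name, v in readings.items() if v > limit}

def react(exceptional):
    """Alarm: produce an operator-visible message per exceptional value."""
    return [f"ALARM {name}: {value}" for name, value in exceptional.items()]

sensors = {"t1": lambda: 18, "t2": lambda: 104}   # simulated temperature sensors
print(react(analyse(observe(sensors), limit=100)))  # ['ALARM t2: 104']
```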

Environmental Control pattern description


The system analyzes information from a set of sensors that collect data from the
system's environment. Further information may also be collected on the state of
the actuators that are connected to the system. Based on the data from the
sensors and actuators, control signals are sent to the actuators that then cause
changes to the system's environment. Information about the sensor values and
the state of the actuators may be displayed.

Stimuli - Values from sensors attached to the system and the state of the system
actuators.

Responses - Control signals to actuators, display information.

Processes - Monitor, Control, Display, Actuator Driver, Actuator monitor.

Used in - Control systems.

Process Pipeline pattern description

A pipeline of processes is set up with data moving in sequence from one end of
the pipeline to another. The processes are often linked by synchronized buffers
to allow the producer and consumer processes to run at different speeds. The
culmination of a pipeline may be display or data storage or the pipeline may
terminate in an actuator.

Stimuli - Input values from the environment or some other process

Responses - Output values to the environment or a shared buffer


Processes - Producer, Buffer, Consumer

Used in - Data acquisition systems, multimedia systems
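The pipeline can be sketched with producer, transformation, and consumer stages linked by buffers. The stages run sequentially here to keep the data flow visible; in a real system they would be concurrent processes synchronized by the buffers, and the scaling step is an illustrative stand-in for a real transformation.

```python
# Sketch of the Process Pipeline pattern: three stages linked by queues,
# with a sentinel value marking the end of the stream.

import queue

SENTINEL = None                        # marks the end of the stream

def producer(values, out):
    for v in values:
        out.put(v)
    out.put(SENTINEL)

def transform(inp, out):
    """Middle stage: convert each raw value to another representation."""
    while (v := inp.get()) is not SENTINEL:
        out.put(v * 2)                 # illustrative transformation
    out.put(SENTINEL)

def consumer(inp):
    results = []
    while (v := inp.get()) is not SENTINEL:
        results.append(v)
    return results

q1, q2 = queue.Queue(), queue.Queue()
producer([10, 25, 42], q1)
transform(q1, q2)
print(consumer(q2))  # [20, 50, 84]
```

Because the stages only communicate through the buffers, they could be moved onto separate threads without changing the stage code, letting producer and consumer run at different speeds as the pattern intends.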

Timing Analysis

The correctness of a real-time system depends not just on the correctness of its
outputs but also on the time at which these outputs were produced. In a timing
analysis, you calculate how often each process in the system must be executed
to ensure that all inputs are processed and all system responses produced in a
timely way. The results of the timing analysis are used to decide how frequently
each process should execute and how these processes should be scheduled by
the real-time operating system.

Factors in timing analysis:

• Deadlines: the times by which stimuli must be processed and some
response produced by the system.
• Frequency: the number of times per second that a process must execute
so that you are confident that it can always meet its deadlines.
• Execution time: the time required to process a stimulus and produce a
response.
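These factors feed standard schedulability checks. One well-known example (a classical result, not taken from this text) is the Liu and Layland rate-monotonic bound: n periodic processes are schedulable under rate-monotonic priorities if the total utilization does not exceed n(2^(1/n) − 1).

```python
# Liu and Layland rate-monotonic schedulability test:
#   sum(C_i / T_i) <= n * (2**(1/n) - 1)
# where C_i is execution time and T_i the period, in the same units.

def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# 10 ms every 50 ms, 15 ms every 100 ms, 20 ms every 200 ms:
print(rm_schedulable([(10, 50), (15, 100), (20, 200)]))  # True (U = 0.45)
```

Note this is a sufficient test only: a process set that fails it is not necessarily unschedulable, but one that passes it is guaranteed to meet its deadlines under rate-monotonic scheduling.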

Real-time Operating Systems

Real-time operating systems are specialized operating systems that manage the
processes in the real-time system. They are responsible for process management
and resource (processor and memory) allocation. They may be based on a standard
kernel, used unchanged or modified for a particular application, and do not
normally include facilities such as file management.

Real-time operating system components:

• Real-time clock provides information for process scheduling.
• Interrupt handler manages aperiodic requests for service.
• Scheduler chooses the next process to be run.
• Resource manager allocates memory and processor resources.
• Dispatcher starts process execution.
The scheduler chooses the next process to be executed by the processor. This
depends on a scheduling strategy which may take the process priority into
account. The resource manager allocates memory and a processor for the process
to be executed. The dispatcher takes the process from ready list, loads it onto a
processor and starts execution.

Scheduling strategies:

• Non-pre-emptive scheduling: once a process has been scheduled for
execution, it runs to completion or until it is blocked for some reason (e.g.,
waiting for I/O).
• Pre-emptive scheduling: the execution of a process may be stopped
if a higher-priority process requires service.
• Scheduling algorithms include round-robin, rate monotonic, and shortest
deadline first.
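A single shortest-deadline-first scheduling decision can be sketched as choosing the ready process with the earliest deadline. The process names and deadlines are illustrative.

```python
# Sketch of one shortest-deadline-first (earliest-deadline-first)
# dispatch decision: among the ready processes, run the one whose
# deadline is closest.

def pick_next(ready):
    """ready: list of dicts with 'name' and 'deadline' (ms from now)."""
    return min(ready, key=lambda p: p["deadline"])

ready = [
    {"name": "log_writer", "deadline": 500},
    {"name": "sensor_poll", "deadline": 50},
    {"name": "display_update", "deadline": 200},
]
print(pick_next(ready)["name"])  # sensor_poll
```

A real dispatcher would re-evaluate this choice whenever a process becomes ready (pre-emptive) or only when the running process blocks or completes (non-pre-emptive).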
UNIT V SOFTWARE TESTING AND SOFTWARE CONFIGURATION
MANAGEMENT

Software Testing Strategy


Software testing is the process of evaluating a software application to identify
if it meets specified requirements and to identify any defects. The following
are common testing strategies:

1. Black box testing – Tests the functionality of the software without
looking at the internal code structure.
2. White box testing – Tests the internal code structure and logic of the
software.
3. Unit testing – Tests individual units or components of the software to
ensure they are functioning as intended.
4. Integration testing – Tests the integration of different components of
the software to ensure they work together as a system.
5. Functional testing – Tests the functional requirements of the software
to ensure they are met.
6. System testing – Tests the complete software system to ensure it
meets the specified requirements.
7. Acceptance testing – Tests the software to ensure it meets the
customer’s or end-user’s expectations.
8. Regression testing – Tests the software after changes or
modifications have been made to ensure the changes have not
introduced new defects.
9. Performance testing – Tests the software to determine its
performance characteristics such as speed, scalability, and stability.
10. Security testing – Tests the software to identify vulnerabilities
and ensure it meets security requirements.
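As a small, hypothetical illustration of the black-box style, the tests below exercise a function purely through its specified inputs and outputs, never its internal structure. The function and cases are illustrative, not from any particular system.

```python
# Black-box unit testing sketch: assertions check the specified
# input/output behaviour of classify_triangle without reference to how
# it is implemented.

def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', 'scalene', or 'invalid'."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"               # sides cannot form a triangle
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box test cases derived from the specification alone:
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "invalid"   # degenerate triangle
print("all black-box tests passed")
```

A white-box test of the same function would instead aim to execute every branch of the `if` chain, which requires looking inside the code.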

Software testing is a type of investigation to find out whether any defect or
error is present in the software, so that errors can be reduced or removed to
increase the quality of the software, and to check whether it fulfills the
specified requirements or not.

According to Glen Myers, software testing has the following objectives:


• The process of investigating and checking a program to find whether
there is an error or not and does it fulfill the requirements or not is called
testing.
• When the number of errors found during testing is high, it indicates
that the testing was good and the test cases were good.
• Finding an unknown error that has not yet been discovered is the sign
of a successful and good test case.
The main objective of software testing is to design the tests in such a way that
it systematically finds different types of errors without taking much time and
effort so that less time is required for the development of the software. The
overall strategy for testing software includes:

1. Before testing starts, it is necessary to identify and specify the
requirements of the product in a quantifiable manner. The software
has different quality characteristics, such as maintainability (the
ability to update and modify), the probability of finding and estimating
any risk, and usability (how easily the software can be used by
customers or end-users). All these quality characteristics should be
specified in a particular order to obtain clear test results without any
error.
2. Specify the objectives of testing in a clear and detailed
manner. There are several objectives of testing, such as effectiveness
(how effectively the software achieves its target), failure (the inability
to fulfill the requirements and perform the required functions), and the
cost of defects or errors (the cost required to fix an error). All these
objectives should be clearly stated in the test plan.
3. Identify the categories of users for the software and develop a
profile for each. Use cases describe the interactions and
communication among different classes of users and the system to
achieve the target; they help identify the actual requirements of the
users, so that the actual use of the product can then be tested.
4. Develop a test plan that gives value to and focuses on rapid-cycle
testing. Rapid-cycle testing improves quality by identifying and
measuring any changes required to improve the software process.
A test plan is therefore an important and effective document that helps
the tester perform rapid-cycle testing.
5. Build robust software that is designed to test itself. The
software should be capable of detecting or identifying different classes
of errors. Moreover, the software design should allow automated and
regression testing, which tests the software to find out whether any
change in the code or program has an adverse or side effect on the
features of the software.
6. Before testing, use effective formal reviews as a filter. The formal
technical review is a technique for identifying errors that have not yet
been discovered. Effective technical reviews conducted before testing
reduce a significant amount of testing effort and the time required for
testing the software, so that the overall development time of the
software is reduced.
7. Conduct formal technical reviews to evaluate the nature, quality
and ability of the test strategy and test cases. A formal technical
review helps detect gaps in the testing approach. Hence, it is necessary
for technical reviewers to evaluate the ability and quality of the test
strategy and test cases, in order to improve the quality of the software.
8. Develop an approach for the continuous improvement of the
testing process. As part of a statistical process control approach, a
measured test strategy should be used for software testing to measure
and control the quality during the development of the software.
Advantages of software testing:
1. Improves software quality and reliability – Testing helps to identify and
fix defects early in the development process, reducing the risk of failure
or unexpected behavior in the final product.
2. Enhances user experience – Testing helps to identify usability issues and
improve the overall user experience.
3. Increases confidence – By testing the software, developers and
stakeholders can have confidence that the software meets the
requirements and works as intended.
4. Facilitates maintenance – By identifying and fixing defects early, testing
makes it easier to maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development
process is less expensive than fixing them later in the life cycle.

Disadvantages of software testing:


1. Time-consuming – Testing can take a significant amount of time,
particularly if thorough testing is performed.
2. Resource-intensive – Testing requires specialized skills and resources,
which can be expensive.
3. Limited coverage – Testing can only reveal defects that are present in
the test cases, and it is possible for defects to be missed.
4. Unpredictable results – The outcome of testing is not always predictable,
and defects can be hard to replicate and fix.
5. Delays in delivery – Testing can delay the delivery of the software if
testing takes longer than expected or if significant defects are identified.

Unit Testing

Unit testing is a type of software testing that focuses on individual units or


components of a software system. The purpose of unit testing is to validate
that each unit of the software works as intended and meets the requirements.
Unit testing is typically performed by developers, and it is performed early in
the development process before the code is integrated and tested as a whole
system.

Unit tests are automated and are run each time the code is changed to ensure
that new code does not break existing functionality. Unit tests are designed to
validate the smallest possible unit of code, such as a function or a method,
and test it in isolation from the rest of the system. This allows developers to
quickly identify and fix any issues early in the development process, improving
the overall quality of the software and reducing the time required for later
testing.
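As a concrete sketch of the idea, a minimal automated unit test might look like the following. The `add` function and the framework choice (Python's built-in unittest) are illustrative assumptions, not part of the original notes:

```python
import unittest

# Hypothetical unit under test: a single function, tested in isolation.
def add(x, y):
    """Return the sum of two numbers."""
    return x + y

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero(self):
        self.assertEqual(add(0, 7), 7)

# Run the suite programmatically; re-running it after every code change
# guards against regressions.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is automated, it can run on every change (for example in a continuous-integration pipeline), immediately flagging new code that breaks existing functionality.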
Unit Testing is a software testing technique in which individual units of
software, i.e., groups of computer program modules, usage procedures, and
operating procedures, are tested to determine whether they are suitable for
use. It is a method by which every independent module is tested by the
developer to determine whether it has any issues, and it is concerned with
the functional correctness of the independent modules. An individual
component may be either an individual function or a procedure. Unit testing
of the software product is carried out during the development of an
application. In the SDLC or V-Model, unit testing is the first level of testing,
done before integration testing. Unit testing is usually performed by
developers, although quality assurance engineers may also carry it out when
developers are reluctant to test.

Objective of Unit Testing:


The objective of Unit Testing is:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers to understand the code base and enable them to
make changes quickly.
6. To help with code reuse.

Types of Unit Testing:


There are 2 types of Unit Testing: Manual, and Automated.
Workflow of Unit Testing:

Unit Testing Techniques:


There are three Unit Testing techniques:
1. Black Box Testing: This testing technique is used in covering the unit
tests for input, user interface, and output parts.
2. White Box Testing: This technique is used in testing the functional
behavior of the system by giving the input and checking the functionality
output including the internal design structure and code of the modules.
3. Gray Box Testing: This technique is used in executing the relevant test
cases, test methods, test functions, and analyzing the code performance
for the modules.

Code coverage techniques used in Unit Testing are listed below:
• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Finite State Machine Coverage

Unit Testing Tools:


Here are some commonly used Unit Testing tools:
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit

Advantages of Unit Testing:


1. Unit Testing allows developers to learn what functionality is provided by
a unit and how to use it to gain a basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the
module works properly.
3. Unit testing enables testing parts of the project without waiting for
others to be completed.
4. Early Detection of Issues: Unit testing allows developers to detect and
fix issues early in the development process, before they become larger
and more difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of
code works as intended and meets the requirements, improving the
overall quality of the software.
6. Increased Confidence: Unit testing provides developers with confidence
in their code, as they can validate that each unit of the software is
functioning as expected.
7. Faster Development: Unit testing enables developers to work faster and
more efficiently, as they can validate changes to the code without having
to wait for the full system to be tested.
8. Better Documentation: Unit testing provides clear and concise
documentation of the code and its behavior, making it easier for other
developers to understand and maintain the software.
9. Facilitation of Refactoring: Unit testing enables developers to safely
make changes to the code, as they can validate that their changes do
not break existing functionality.
10. Reduced Time and Cost: Unit testing can reduce the time and cost
required for later testing, as it helps to identify and fix issues early in
the development process.

Disadvantages of Unit Testing:


1. The process is time-consuming for writing the unit test cases.
2. Unit Testing will not catch every error in the module, because some
errors only surface when the modules interact, during integration testing.
3. Unit Testing is not efficient for checking the errors in the UI(User
Interface) part of the module.
4. It requires more time for maintenance when the source code is changed
frequently.
5. It cannot cover the non-functional testing parameters such as
scalability, the performance of the system, etc.
6. Time and Effort: Unit testing requires a significant investment of time
and effort to create and maintain the test cases, especially for complex
systems.
7. Dependence on Developers: The success of unit testing depends on the
developers, who must write clear, concise, and comprehensive test
cases to validate the code.
8. Difficulty in Testing Complex Units: Unit testing can be challenging when
dealing with complex units, as it can be difficult to isolate and test
individual units in isolation from the rest of the system.
9. Difficulty in Testing Interactions: Unit testing may not be sufficient for
testing interactions between units, as it only focuses on individual units.
10. Difficulty in Testing User Interfaces: Unit testing may not be
suitable for testing user interfaces, as it typically focuses on the
functionality of individual units.
11. Over-reliance on Automation: Over-reliance on automated unit
tests can lead to a false sense of security, as automated tests may not
uncover all possible issues or bugs.
12. Maintenance Overhead: Unit testing requires ongoing
maintenance and updates, as the code and test cases must be kept up-
to-date with changes to the software.

Integration Testing
Integration Testing is defined as a type of testing where software modules
are integrated logically and tested as a group. A typical software project
consists of multiple software modules, coded by different programmers. The
purpose of this level of testing is to expose defects in the interaction between
these software modules when they are integrated
Integration Testing focuses on checking data communication amongst these
modules. Hence it is also termed as ‘I & T’ (Integration and Testing), ‘String
Testing’ and sometimes ‘Thread Testing’.

Although each software module is unit tested, defects still exist for various
reasons like
• A Module, in general, is designed by an individual software developer
whose understanding and programming logic may differ from other
programmers. Integration Testing becomes necessary to verify the
software modules work in unity
• At the time of module development, there is a wide chance of changes
in requirements by the clients. These new requirements may not be unit
tested, and hence system integration testing becomes necessary.
• Interfaces of the software modules with the database could be
erroneous
• External Hardware interfaces, if any, could be erroneous
• Inadequate exception handling could cause issues.

Example of Integration Test Case


Integration test cases differ from other test cases in the sense that they
focus mainly on the interfaces and the flow of data/information between
the modules. Here, priority is given to the integrating links rather than
the unit functions, which have already been tested.
Sample Integration Test Cases for the following scenario: Application has 3
modules say ‘Login Page’, ‘Mailbox’ and ‘Delete emails’ and each of them is
integrated logically.
Here do not concentrate much on the Login Page testing as it’s already been
done in Unit Testing. But check how it’s linked to the Mail Box Page.
Similarly Mail Box: Check its integration to the Delete Mails Module.

Test Case ID | Test Case Objective | Test Case Description | Expected Result
1 | Check the interface link between the Login and Mailbox modules | Enter login credentials and click on the Login button | User is directed to the Mail Box
2 | Check the interface link between the Mailbox and Delete Mails modules | From the Mailbox, select an email and click the delete button | Selected email appears in the Deleted/Trash folder
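The two test cases above can be sketched in code. Everything below (the `login` function, the `Mailbox` class, and their behavior) is a hypothetical stand-in for the real modules, invented purely to illustrate integration-level assertions:

```python
# Hypothetical modules under integration test (invented for illustration).
class Mailbox:
    def __init__(self):
        self.inbox = ["mail-1", "mail-2"]
        self.trash = []

    def delete(self, mail):
        # Integration point: Mailbox -> Delete Mails module.
        self.inbox.remove(mail)
        self.trash.append(mail)

def login(username, password):
    # Integration point: Login -> Mailbox module.
    if username == "user" and password == "secret":
        return Mailbox()  # a successful login directs the user to the mailbox
    return None

# Test case 1: the Login page is linked to the Mailbox module.
box = login("user", "secret")
assert isinstance(box, Mailbox), "login should direct to the Mail Box"

# Test case 2: the Mailbox is linked to the Delete Mails module.
box.delete("mail-1")
assert "mail-1" in box.trash and "mail-1" not in box.inbox
```

Note that the assertions exercise the links between the modules, not the internals of each module, which unit testing is assumed to have already covered.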

Types of Integration Testing


Software engineering defines a variety of strategies to execute integration
testing, viz.:
• Big Bang Approach
• Incremental Approach, which is further divided into:
• Top Down Approach
• Bottom Up Approach
• Sandwich Approach – a combination of Top Down and Bottom Up

Validation Testing
Validation testing is the process of evaluating software during the
development process, or at the end of it, to determine whether it satisfies
the specified business requirements. Validation testing ensures that the
product actually meets the client's needs. It can also be defined as
demonstrating that the product fulfills its intended use when deployed in an
appropriate environment. It answers the question: Are we building the right
product?

Verification and Validation is the process of investigating whether a software
system satisfies specifications and standards and fulfills its required purpose.
Barry Boehm described verification and validation as follows:

Verification: Are we building the product right?


Validation: Are we building the right product?

Verification:
Verification is the process of checking that a software achieves its goal without
any bugs. It is the process to ensure whether the product that is developed is
right or not. It verifies whether the developed product fulfills the requirements
that we have.
Verification is Static Testing.
Activities involved in verification:
1. Inspections
2. Reviews
3. Walkthroughs
4. Desk-checking

Note: Verification is followed by Validation.

Validation:
Validation is the process of checking whether the software product is up to the
mark, in other words, whether the product meets the high-level requirements.
It checks that what we are developing is the right product; it is a validation of
the actual product against the expected product.
Validation is Dynamic Testing.
Activities involved in validation:
1. Black box testing
2. White box testing
3. Unit testing
4. Integration testing
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software/product
under test is evaluated during this type of testing.

Activities:
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
System Testing

System testing is a type of software testing that evaluates the overall


functionality and performance of a complete and fully integrated software
solution. It tests if the system meets the specified requirements and if it is
suitable for delivery to the end-users. This type of testing is performed after
the integration testing and before the acceptance testing.

System Testing is a type of software testing that is performed on a complete


integrated system to evaluate the compliance of the system with the
corresponding requirements. In system testing, the components that have
passed integration testing are taken as input. The goal of integration testing
is to detect any irregularity between the units that are integrated together. System testing
detects defects within both the integrated units and the whole system. The
result of system testing is the observed behavior of a component or a system
when it is tested. System Testing is carried out on the whole system in the
context of either system requirement specifications or functional requirement
specifications or in the context of both. System testing tests the design and
behavior of the system and also the expectations of the customer. It is
performed to test the system beyond the bounds mentioned in the software
requirements specification (SRS). System testing is basically performed by a
testing team that is independent of the development team, which helps to test
the quality of the system impartially. It includes both functional and
non-functional testing. System testing is a black-box testing technique,
performed after integration testing and before acceptance testing.
System Testing Process: System Testing is performed in the following
steps:
• Test Environment Setup: Create testing environment for the better
quality testing.
• Create Test Case: Generate test case for the testing process.
• Create Test Data: Generate the data that is to be tested.
• Execute Test Case: After the generation of the test case and the test
data, test cases are executed.
• Defect Reporting: Defects detected in the system are reported.
• Regression Testing: It is carried out to test for side effects of the
changes made.
• Log Defects: Detected defects are logged and then fixed.
• Retest: If a test is not successful, the test is performed again after
the fix.
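The create-test-data and execute steps above can be sketched as a small data-driven loop. The `system_under_test` function here is a made-up stand-in for a fully integrated system:

```python
# Hypothetical stand-in for the fully integrated system under test:
# applies a 10% discount (integer arithmetic) to orders of 100 or more.
def system_under_test(order_total):
    return order_total - order_total // 10 if order_total >= 100 else order_total

# Create Test Case / Create Test Data: (input, expected output) pairs.
test_data = [
    (50, 50),      # below the discount threshold
    (100, 90),     # exactly at the threshold
    (200, 180),    # above the threshold
]

# Execute Test Case / Defect Reporting: run each case and log any failures.
defects = []
for given, expected in test_data:
    actual = system_under_test(given)
    if actual != expected:
        defects.append({"input": given, "expected": expected, "actual": actual})

print("defects logged:", defects)  # prints: defects logged: []
```

Any entry that accumulates in `defects` would then feed the defect reporting, logging, fixing, and retest steps.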

Types of System Testing:


• Performance Testing: Performance Testing is a type of software
testing that is carried out to test the speed, scalability, stability and
reliability of the software product or application.
• Load Testing: Load Testing is a type of software Testing which is
carried out to determine the behavior of a system or software product
under extreme load.
• Stress Testing: Stress Testing is a type of software testing performed
to check the robustness of the system under the varying loads.
• Scalability Testing: Scalability Testing is a type of software testing
which is carried out to check the performance of a software application
or system in terms of its capability to scale up or scale down the number
of user request load.

Advantages of System Testing :


• The testers do not require deep programming knowledge to carry out
this testing.
• It will test the entire product or software so that we will easily detect
the errors or defects which cannot be identified during the unit testing
and integration testing.
• The testing environment is similar to that of the real time production or
business environment.
• It checks the entire functionality of the system with different test scripts
and also it covers the technical and business requirements of clients.
• After this testing, almost all possible bugs or errors will have been
covered, and hence the development team can confidently go ahead
with acceptance testing.

Debugging is the process of identifying and resolving errors, or bugs, in a


software system. It is an important aspect of software engineering because bugs
can cause a software system to malfunction, and can lead to poor performance
or incorrect results. Debugging can be a time-consuming and complex task, but
it is essential for ensuring that a software system is functioning correctly.
There are several common methods and techniques used in debugging,
including:
1. Code Inspection: This involves manually reviewing the source code of
a software system to identify potential bugs or errors.
2. Debugging Tools: There are various tools available for debugging
such as debuggers, trace tools, and profilers that can be used to
identify and resolve bugs.
3. Unit Testing: This involves testing individual units or components of
a software system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions between
different components of a software system to identify bugs or errors.
5. System Testing: This involves testing the entire software system to
identify bugs or errors.
6. Monitoring: This involves monitoring a software system for unusual
behavior or performance issues that can indicate the presence of bugs
or errors.
7. Logging: This involves recording events and messages related to the
software system, which can be used to identify bugs or errors.
It is important to note that debugging is an iterative process, and it may take
multiple attempts to identify and resolve all bugs in a software system.
Additionally, it is important to have a well-defined process in place for reporting
and tracking bugs, so that they can be effectively managed and resolved.
In summary, debugging is an important aspect of software engineering, it’s the
process of identifying and resolving errors, or bugs, in a software system. There
are several common methods and techniques used in debugging, including code
inspection, debugging tools, unit testing, integration testing, system testing,
monitoring, and logging. It is an iterative process that may take multiple
attempts to identify and resolve all bugs in a software system.
In the context of software engineering, debugging is the process of fixing a bug
in the software. In other words, it refers to identifying, analyzing, and removing
errors. This activity begins after the software fails to execute properly and
concludes by solving the problem and successfully testing the software. It is
considered to be an extremely complex and tedious task because errors need to
be resolved at all stages of debugging.
A better approach is to run the program within a debugger, which is a
specialized environment for controlling and monitoring the execution of a
program. The basic functionality provided by a debugger is the insertion of
breakpoints within the code. When the program is executed within the
debugger, it stops at each breakpoint. Many IDEs, such as Visual C++ and C-
Builder provide built-in debuggers.
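As a concrete sketch, Python's built-in pdb debugger supports exactly this breakpoint-driven workflow. The `average` function below is a made-up example; the commented-out breakpoint() call marks where execution would pause for inspection:

```python
def average(values):
    """Compute the arithmetic mean of a non-empty list."""
    total = 0
    for v in values:
        # breakpoint()  # uncomment to pause here and inspect total and v
        total += v
    return total / len(values)

# Under the debugger, the whole script can instead be started with:
#   python -m pdb this_script.py
# Execution then stops at each breakpoint, allowing variables to be
# examined and the program to be stepped through line by line.
print(average([2, 4, 6]))  # prints 4.0
```

Stepping through the loop this way makes it easy to watch `total` accumulate and to spot, for example, an off-by-one or wrong-divisor fault.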
Debugging Process: Steps involved in debugging are:
• Problem identification and report preparation.
• Assigning the report to a software engineer to verify that the defect is
genuine.
• Defect Analysis using modeling, documentation, finding and testing
candidate flaws, etc.
• Defect Resolution by making required changes to the system.
• Validation of corrections.
The debugging process will always have one of two outcomes :
1. The cause will be found and corrected.
2. The cause will not be found.
Later, the person performing debugging may suspect a cause, design a test case
to help validate that suspicion and work toward error correction in an iterative
fashion.
During debugging, we encounter errors that range from mildly annoying to
catastrophic. As the consequences of an error increase, so does the pressure
to find its cause. This pressure sometimes forces a software developer to fix
one error while introducing two more.
Debugging Approaches/Strategies:
1. Brute Force: Study the system for a larger duration in order to
understand the system. It helps the debugger to construct different
representations of systems to be debugged depending on the need. A
study of the system is also done actively to find recent changes made
to the software.
2. Backtracking: Backward analysis of the problem which involves
tracing the program backward from the location of the failure message
in order to identify the region of faulty code. A detailed study of the
region is conducted to find the cause of defects.
3. Forward analysis: Tracing the program forward using breakpoints or
print statements at different points in the program and studying the
results. The region where the wrong outputs are obtained is the region
that needs to be focused on to find the defect.
4. Using past experience: Debugging the software using past experience
with problems of a similar nature. The success of this approach depends
on the expertise of the debugger.
5. Cause elimination: This approach introduces the concept of binary
partitioning. Data related to the error occurrence are organized to
isolate potential causes.
6. Static analysis: Analyzing the code without executing it to identify
potential bugs or errors. This approach involves analyzing code syntax,
data flow, and control flow.
7. Dynamic analysis: Executing the code and analyzing its behavior at
runtime to identify errors or bugs. This approach involves techniques
like runtime debugging and profiling.
8. Collaborative debugging: Involves multiple developers working
together to debug a system. This approach is helpful in situations
where multiple modules or components are involved, and the root
cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify the
sequence of events leading up to the error. This approach involves
collecting and analyzing logs and traces generated by the system
during its execution.
10. Automated Debugging: The use of automated tools and
techniques to assist in the debugging process. These tools can include
static and dynamic analysis tools, as well as tools that use machine
learning and artificial intelligence to identify errors and suggest fixes.
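As a small sketch of the logging-and-tracing approach using Python's standard logging module (the `divide` function and its log messages are invented for illustration):

```python
import io
import logging

# Route log records into an in-memory stream so the trace can be inspected.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("debug-demo")
log.setLevel(logging.DEBUG)
log.addHandler(handler)
log.propagate = False

def divide(a, b):
    log.debug("divide called with a=%s b=%s", a, b)  # trace the inputs
    if b == 0:
        log.error("division by zero attempted")      # record the fault
        return None
    return a / b

divide(10, 2)
divide(10, 0)
# The captured trace shows the sequence of events leading up to the error.
print(stream.getvalue())
```

In a real system the stream would typically be a log file, and the recorded sequence of DEBUG and ERROR events is what the debugger analyzes after a failure.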
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other
programs. A lot of public domain software like gdb and dbx are available for
debugging. They offer console-based command-line interfaces. Examples of
automated debugging tools include code-based tracers, profilers, interpreters,
etc. Some of the widely used debuggers are:
• Radare2
• WinDbg
• Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors,
etc., whereas debugging starts after a bug has been identified in the software.
Testing is used to ensure that the program is correct and does what it is
supposed to do, with a certain minimum success rate. Testing can be manual
or automated.
There are several different types of testing unit testing, integration testing,
alpha, and beta testing, etc. Debugging requires a lot of knowledge, skills, and
expertise. It can be supported by some automated tools available but is more of
a manual process as every bug is different and requires a different technique,
unlike a pre-defined testing mechanism.

Advantages of Debugging:

Several advantages of debugging in software engineering:

1. Improved system quality: By identifying and resolving bugs, a


software system can be made more reliable and efficient, resulting in
improved overall quality.
2. Reduced system downtime: By identifying and resolving bugs, a
software system can be made more stable and less likely to experience
downtime, which can result in improved availability for users.
3. Increased user satisfaction: By identifying and resolving bugs, a
software system can be made more user-friendly and better able to
meet the needs of users, which can result in increased satisfaction.
4. Reduced development costs: By identifying and resolving bugs early
in the development process, it can save time and resources that would
otherwise be spent on fixing bugs later in the development process or
after the system has been deployed.
5. Increased security: By identifying and resolving bugs that could be
exploited by attackers, a software system can be made more secure,
reducing the risk of security breaches.
6. Facilitates change: With debugging, it becomes easy to make
changes to the software as it becomes easy to identify and fix bugs
that would have been caused by the changes.
7. Better understanding of the system: Debugging can help developers
gain a better understanding of how a software system works, and how
different components of the system interact with one another.
8. Facilitates testing: By identifying and resolving bugs, it makes it
easier to test the software and ensure that it meets the requirements
and specifications.
In summary, debugging is an important aspect of software engineering as it
helps to improve system quality, reduce system downtime, increase user
satisfaction, reduce development costs, increase security, facilitate change, a
better understanding of the system, and facilitate testing.

Disadvantages of Debugging:

While debugging is an important aspect of software engineering, there are also


some disadvantages to consider:
1. Time-consuming: Debugging can be a time-consuming process,
especially if the bug is difficult to find or reproduce. This can cause
delays in the development process and add to the overall cost of the
project.
2. Requires specialized skills: Debugging can be a complex task that
requires specialized skills and knowledge. This can be a challenge for
developers who are not familiar with the tools and techniques used in
debugging.
3. Can be difficult to reproduce: Some bugs may be difficult to
reproduce, which can make it challenging to identify and resolve them.
4. Can be difficult to diagnose: Some bugs may be caused by
interactions between different components of a software system,
which can make it challenging to identify the root cause of the problem.
5. Can be difficult to fix: Some bugs may be caused by fundamental
design flaws or architecture issues, which can be difficult or impossible
to fix without significant changes to the software system.
6. Limited insight: In some cases, debugging tools can only provide
limited insight into the problem and may not provide enough
information to identify the root cause of the problem.
7. Can be expensive: Debugging can be an expensive process,
especially if it requires additional resources such as specialized
debugging tools or additional development time.
In summary, debugging is an important aspect of software engineering but it
also has some disadvantages, it can be time-consuming, requires specialized
skills, can be difficult to reproduce, diagnose and fix, may have limited insight,
and can be expensive.
White box testing techniques analyze the internal structures of the software:
the data structures used, the internal design, the code structure, and the
working of the software, rather than just the functionality as in black box
testing. It is also called glass box testing, clear box testing, or structural
testing. White box testing is also known as transparent testing or open box
testing.
White box testing is a software testing technique that involves testing the
internal structure and workings of a software application. The tester has access
to the source code and uses this knowledge to design test cases that can verify
the correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and
it is used to test the software’s internal logic, flow, and structure. The tester
creates test cases to examine the code paths and logic flows to ensure they
meet the specified requirements.
Working process of white box testing:
• Input: Requirements, Functional specifications, design documents,
source code.
• Processing: Performing risk analysis to guide through the entire
process.
• Proper test planning: Designing test cases so as to cover the entire
code. Execute rinse-repeat until error-free software is reached. Also,
the results are communicated.
• Output: Preparing final report of the entire testing process.
Testing techniques:
• Statement coverage: In this technique, the aim is to traverse all
statements at least once. Hence, each line of code is tested. In the case
of a flowchart, every node must be traversed at least once. Since all
lines of code are covered, it helps in pointing out faulty code.
(Figure: Statement Coverage Example)
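As a minimal sketch of the idea (the function and inputs here are invented for illustration, not taken from the notes), statement coverage means choosing inputs so that every line executes at least once:

```python
# Hypothetical function under test: statement coverage requires that
# every line below executes at least once.
def classify(x):
    result = "non-negative"   # statement 1
    if x < 0:                 # statement 2
        result = "negative"   # statement 3
    return result             # statement 4

# x = -1 alone executes statements 1-4 (100% statement coverage);
# x = 5 alone would leave statement 3 untested.
full_coverage_inputs = [-1]
```

Note how a single well-chosen input can reach full statement coverage here, while a poorly chosen one cannot.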
• Branch Coverage: In this technique, test cases are designed so that
each branch from all decision points is traversed at least once. In a
flowchart, all edges must be traversed at least once.
(Figure: 4 test cases are required so that all branches of all decisions are
covered, i.e., all edges of the flowchart are covered.)
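A hedged sketch of the same idea in code (the function and inputs are invented for illustration): branch coverage requires taking both the true and the false edge of every decision.

```python
# Hypothetical function with two decision points; branch coverage
# requires both outcomes of each decision to be exercised.
def grade(score):
    if score >= 50:           # decision 1
        if score >= 80:       # decision 2
            return "distinction"
        return "pass"
    return "fail"

# Three inputs together take all four branch edges:
# 85 -> D1 true, D2 true; 60 -> D1 true, D2 false; 30 -> D1 false.
branch_tests = {85: "distinction", 60: "pass", 30: "fail"}
```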

• Condition Coverage: In this technique, all individual conditions must
be covered, as shown in the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
4. #TC1 – X = 0, Y = 55
5. #TC2 – X = 5, Y = 0
• Multiple Condition Coverage: In this technique, all the possible
combinations of the possible outcomes of conditions are tested at least
once. Let’s consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
4. #TC1: X = 0, Y = 0
5. #TC2: X = 0, Y = 5
6. #TC3: X = 55, Y = 0
7. #TC4: X = 55, Y = 5
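The four test cases above can be run against a direct Python translation of the pseudocode (a sketch; the function name is invented):

```python
# IF (X == 0 || Y == 0) from the pseudocode, as a Python predicate.
def prints_zero(x, y):
    return x == 0 or y == 0

# TC1..TC4 cover all four outcome combinations of (X == 0, Y == 0):
# (T, T), (T, F), (F, T), (F, F).
mcc_cases = [(0, 0), (0, 5), (55, 0), (55, 5)]
outcome_combinations = {(x == 0, y == 0) for x, y in mcc_cases}
```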
• Basis Path Testing: In this technique, control flow graphs are made
from code or flowchart and then Cyclomatic complexity is calculated
which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent
path. Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
The cyclomatic complexity can be computed in any of three ways:
• V(G) = P + 1, where P is the number of predicate nodes in the
flow graph
• V(G) = E – N + 2, where E is the number of edges and N is the
total number of nodes
• V(G) = Number of non-overlapping regions in the graph
Independent paths in the example flow graph:
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
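A small sketch of how V(G) bounds the number of path-based test cases (the graph sizes below are hypothetical, chosen so the two formulas agree):

```python
# Two of the V(G) formulas above, as functions.
def v_from_edges(edges, nodes, components=1):
    return edges - nodes + 2 * components     # V(G) = E - N + 2P

def v_from_predicates(predicate_nodes):
    return predicate_nodes + 1                # V(G) = P + 1

# A hypothetical flow graph with 2 predicate nodes, 8 edges and
# 7 nodes: both formulas give the same V(G), so 3 independent-path
# test cases are enough.
independent_path_count = v_from_edges(8, 7)
```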
• Loop Testing: Loops are widely used and are fundamental to
many algorithms; hence, their testing is very important. Errors often
occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are
designed that:
• Skip the loop entirely
• Only one pass through the loop
• 2 passes
• m passes, where m < n
• n-1, n and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their
minimum count and we start from the innermost loop. Simple
loop tests are conducted for the innermost loop and this is
worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another.
Simple loop tests are applied for each. If they’re not
independent, treat them like nesting.
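The simple-loop schedule above (skip, 1, 2, m < n, n-1, n, n+1 passes) can be sketched as follows, with an illustrative loop of bound n = 5 (the function under test is invented):

```python
# Illustrative loop under test: sums at most `limit` leading values.
def sum_first(values, limit):
    total = 0
    for i, v in enumerate(values):
        if i >= limit:          # loop body runs min(limit, len(values)) times
            break
        total += v
    return total

n = 5
data = [1, 2, 3, 4, 5]
# Pass counts from the schedule: skip, 1, 2, m < n, n-1, n, n+1.
pass_counts = [0, 1, 2, 3, n - 1, n, n + 1]
results = [sum_first(data, k) for k in pass_counts]
# The n+1 case checks that the loop cannot run past its data.
```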
White box testing is performed in 2 steps:
1. The tester should understand the code well
2. The tester should write some code for test cases and execute them
Tools required for White box Testing:
• PyUnit
• Sqlmap
• Nmap
• Parasoft Jtest
• Nunit
• VeraUnit
• CppUnit
• Bugzilla
• Fiddler
• JSUnit.net
• OpenGrok
• Wireshark
• HP Fortify
• CSUnit

Features of white box testing:

1. Code coverage analysis: White box testing helps to analyze the code
coverage of an application, which helps to identify the areas of the code
that are not being tested.
2. Access to the source code: White box testing requires access to the
application’s source code, which makes it possible to test individual
functions, methods, and modules.
3. Knowledge of programming languages: Testers performing white
box testing must have knowledge of programming languages like Java,
C++, Python, and PHP to understand the code structure and write
tests.
4. Identifying logical errors: White box testing helps to identify logical
errors in the code, such as infinite loops or incorrect conditional
statements.
5. Integration testing: White box testing is useful for integration
testing, as it allows testers to verify that the different components of
an application are working together as expected.
6. Unit testing: White box testing is also used for unit testing, which
involves testing individual units of code to ensure that they are working
correctly.
7. Optimization of code: White box testing can help to optimize the
code by identifying any performance issues, redundant code, or other
areas that can be improved.
8. Security testing: White box testing can also be used for security
testing, as it allows testers to identify any vulnerabilities in the
application’s code.
Advantages:
1. White box testing is thorough as the entire code and structures are
tested.
2. It results in the optimization of code removing errors and helps in
removing extra lines of code.
3. It can start at an earlier stage as it doesn’t require any interface as in
the case of black box testing.
4. Easy to automate.
5. White box testing can be easily started in Software Development Life
Cycle.
6. Easy Code Optimization.
Some of the advantages of white box testing include:
• Testers can identify defects that cannot be detected through other
testing techniques.
• Testers can create more comprehensive and effective test cases that
cover all code paths.
• Testers can ensure that the code meets coding standards and is
optimized for performance.
However, there are also some disadvantages to white box testing, such as:
• Testers need to have programming knowledge and access to the
source code to perform tests.
• Testers may focus too much on the internal workings of the software
and may miss external issues.
• Testers may have a biased view of the software since they are familiar
with its internal workings.
Overall, white box testing is an important technique in software
engineering, and it is useful for identifying defects and ensuring that
software applications meet their requirements and specifications at the
code level.
Disadvantages:
1. It is very expensive.
2. Redesigning code and rewriting code needs test cases to be written
again.
3. Testers are required to have in-depth knowledge of the code and
programming language as opposed to black-box testing.
4. Missing functionalities cannot be detected as the code that exists is
tested.
5. Very complex and at times not realistic.
6. Much higher chance of errors in production.

Basis Path Testing is a white-box testing technique based on the control
structure of a program or a module. Using this structure, a control flow graph is
prepared and the various possible paths present in the graph are executed as
part of the testing. Therefore, by definition, basis path testing is a technique of
selecting the paths in the control flow graph that provide a basis set of
execution paths through the program or module. Since this testing is based on
the control structure of the program, it requires complete knowledge of the
program’s structure. To design test cases using this technique, four steps are
followed:
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
Let’s understand each step one by one. 1. Control Flow Graph – A control flow
graph (or simply, flow graph) is a directed graph which represents the control
structure of a program or module. A control flow graph (V, E) has V number of
nodes/vertices and E number of edges in it. A control graph can also have :
• Junction Node – a node with more than one arrow entering it.
• Decision Node – a node with more than one arrow leaving it.
• Region – area bounded by edges and nodes (area outside the graph is
also counted as a region.).
Below are the notations used while constructing a flow graph (the original
notes show a figure for each construct):
• Sequential Statements
• If – Then – Else
• Do – While
• While – Do
• Switch – Case

Cyclomatic Complexity – The cyclomatic complexity V(G) is said to be a
measure of the logical complexity of a program. It can be calculated using three
different formulae:
1. Formula based on edges and nodes:
V(G) = e - n + 2*P
where e is the number of edges, n is the number of vertices, and P is the
number of connected components. For example, consider the first graph given
above, where e = 4, n = 4 and P = 1.
So,
Cyclomatic complexity V(G)
= 4 - 4 + 2 * 1
= 2
2. Formula based on decision nodes:
V(G) = d + P
where d is the number of decision nodes and P is the number of connected
components. For example, consider the first graph given above,
where d = 1 and P = 1.
So,
Cyclomatic Complexity V(G)
= 1 + 1
= 2
3. Formula based on regions:
V(G) = number of regions in the graph
For example, consider the first graph given above:
Cyclomatic complexity V(G)
= 1 (for Region 1) + 1 (for Region 2)
= 2
Hence, using all three formulae above, the cyclomatic complexity obtained
remains the same. All three formulae can be used to compute and verify the
cyclomatic complexity of the flow graph.
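The worked example can be checked mechanically; this sketch simply re-runs the three formulas on the numbers quoted above (e = 4, n = 4, P = 1, one decision node, two regions):

```python
# Numbers from the worked example in the text.
e, n, p = 4, 4, 1   # edges, nodes, connected components
d = 1               # decision nodes
regions = 2         # regions (including the outer region)

v_edges = e - n + 2 * p   # formula 1: V(G) = e - n + 2P
v_decisions = d + p       # formula 2: V(G) = d + P
v_regions = regions       # formula 3: V(G) = number of regions
```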

Note –
1. For one function [e.g. Main( ) or Factorial( ) ], only one flow graph is
constructed. If a program has multiple functions, then a
separate flow graph is constructed for each of them. Also, in the
cyclomatic complexity formula, the value of ‘P’ is set depending on the
number of graphs present in total.
2. If a decision node has exactly two arrows leaving it, then it is counted
as one decision node. However, if there are more than 2 arrows leaving
a decision node, it is computed using this formula:
d = k - 1
where k is the number of arrows leaving the decision node.

Independent Paths : An independent path in the control flow graph is the one
which introduces at least one new edge that has not been traversed before the
path is defined. The cyclomatic complexity gives the number of independent
paths present in a flow graph. This is because the cyclomatic complexity is used
as an upper-bound for the number of tests that should be executed in order to
make sure that all the statements in the program have been executed at least
once. Consider the first graph given above: there the number of independent
paths is 2, because the number of independent paths is equal to the cyclomatic
complexity. So, the independent paths in the first graph given above are:
• Path 1:
A -> B
• Path 2:
C -> D

Note – Independent paths are not unique. In other words, if for a graph the
cyclomatic complexity comes out to be N, then it is possible to obtain
two different sets of paths which are independent in nature.

Design Test Cases : Finally, after obtaining the independent paths, test cases
can be designed where each test case represents one or more independent
paths.

Advantages: Basis path testing is applicable in the following cases:
1. More Coverage – Basis path testing provides the best code coverage
as it aims to achieve maximum logic coverage instead of maximum path
coverage. This results in an overall thorough testing of the code.
2. Maintenance Testing – When a software is modified, it is still
necessary to test the changes made in the software which as a result,
requires path testing.
3. Unit Testing – When a developer writes the code, he or she tests the
structure of the program or module themselves first. This is why basis
path testing requires enough knowledge about the structure of the
code.
4. Integration Testing – When one module calls other modules, there
are high chances of Interface errors. In order to avoid the case of such
errors, path testing is performed to test all the paths on the interfaces
of the modules.
5. Testing Effort – Since the basis path testing technique takes into
account the complexity of the software (i.e., program or module) while
computing the cyclomatic complexity, therefore it is intuitive to note
that testing effort in case of basis path testing is directly proportional
to the complexity of the software or program.

Control structure testing is used to increase the coverage area by testing
various control structures present in the program. The different types of testing
performed under control structure testing are as follows:
1. Condition Testing
2. Data Flow Testing
3. Loop Testing

1. Condition Testing: Condition testing is a test case design method which
ensures that the logical conditions and decision statements are free from errors.
The errors present in logical conditions can be incorrect boolean operators,
missing parentheses in a boolean expression, errors in relational operators,
arithmetic expressions, and so on. The common types of logical conditions that
are tested using condition testing are:

1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic
expressions and ‘op’ is a relational operator.
2. A simple condition, like any relational expression preceded by a NOT
(~) operator. For example, (~E1), where ‘E1’ is an arithmetic expression
and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions,
Boolean operators, and parentheses. For example, (E1 & E2) | (E2 & E3),
where E1, E2, E3 denote arithmetic expressions and ‘&’ and ‘|’ denote the
AND and OR operators.
4. A Boolean expression consists of operands and Boolean operators like
AND, OR, NOT. For example, ‘A | B’ is a Boolean expression where ‘A’
and ‘B’ denote operands and | denotes the OR operator.
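As an illustrative sketch of condition testing, mirroring the compound form (E1 & E2) | (E2 & E3) above, each test case decides the compound condition a different way; the expected outcomes were computed by hand:

```python
# Compound condition (E1 & E2) | (E2 & E3) as a Python predicate.
def compound(e1, e2, e3):
    return (e1 and e2) or (e2 and e3)

# Each case exercises a different way the compound condition can be
# decided; expected outcomes are worked out by hand.
cases = [
    ((True,  True,  False), True),   # first clause true
    ((False, True,  True),  True),   # second clause true
    ((False, True,  False), False),  # both clauses false
    ((True,  False, True),  False),  # E2 false disables both clauses
]
all_pass = all(compound(*args) == expected for args, expected in cases)
```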

2. Data Flow Testing: The data flow test method chooses the test paths of a
program based on the locations of the definitions and uses of the variables in
the program. The data flow test approach can be described as follows: suppose
each statement in a program is assigned a unique statement number, and that no
function can modify its parameters or global variables. For a statement with S
as its statement number:
DEF (S) = {X | Statement S has a definition of X}
USE (S) = {X | Statement S has a use of X}
If statement S is an if or loop statement, then its DEF set is empty and its USE
set depends on the condition of statement S. The definition of a variable X at
statement S is said to be live at statement S’ if there exists a path from S
to S’ that contains no other definition of X. A definition-use (DU) chain
of variable X has the form [X, S, S’], where S and S’ denote statement numbers,
X is in DEF(S) and USE(S’), and the definition of X in statement S is live at
statement S’. A simple data flow test approach requires that each DU chain be
covered at least once. This approach is known as the DU test approach. DU
testing does not ensure coverage of all branches of a program; however, a
branch fails to be covered by DU testing only in rare cases, such as
an if–then–else in which the then part contains no definition of any variable
and the else part is absent. Data flow testing strategies are
appropriate for choosing test paths of a program containing nested if and loop
statements.
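A minimal sketch of DU chains for a three-statement straight-line program (the program and its numbering are invented; the DEF/USE sets follow the definitions above):

```python
# DEF(S) / USE(S) sets for a hypothetical straight-line program:
#   1: x = input()    2: y = x * 2    3: print(x + y)
program = {
    1: {"DEF": {"x"}, "USE": set()},
    2: {"DEF": {"y"}, "USE": {"x"}},
    3: {"DEF": set(), "USE": {"x", "y"}},
}

def du_chains(prog):
    """All (X, S, S') with X in DEF(S) and X in USE(S'), where no
    statement between S and S' redefines X (straight-line code only)."""
    chains = []
    stmts = sorted(prog)
    for i, s in enumerate(stmts):
        for var in prog[s]["DEF"]:
            for s2 in stmts[i + 1:]:
                if var in prog[s2]["USE"]:
                    chains.append((var, s, s2))
                if var in prog[s2]["DEF"]:   # definition killed here
                    break
    return chains

chains = du_chains(program)   # each chain must be covered by some test path
```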

3. Loop Testing: Loop testing is a white box testing technique. It
specifically focuses on the validity of loop constructs. The following are the
types of loops:
1. Simple Loops – The following set of tests can be applied to simple
loops, where n is the maximum allowable number of passes through the loop:
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, n+1 times.
2. Concatenated Loops – If the loops are not dependent on each other,
concatenated loops can be tested using the approach used in simple loops.
If the loops are interdependent, the steps for nested loops are followed.
3. Nested Loops – Loops within loops are called nested loops. When
testing nested loops, the number of tests increases as the level of nesting
increases. The steps for testing nested loops are as follows:
1. Start with the inner loop. Set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.
4. Unstructured Loops – This type of loop should be redesigned,
whenever possible, to reflect the use of structured
programming constructs.
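The nested-loop steps above can be sketched as follows (the function under test is invented for illustration):

```python
# Illustrative doubly nested loop: counts cells of a rows x cols grid.
def grid_cells(rows, cols):
    total = 0
    for r in range(rows):        # outer loop
        for c in range(cols):    # inner loop
            total += 1
    return total

# Steps 1-2: hold the outer loop at its minimum, exercise the inner loop.
inner_sweep = [grid_cells(1, c) for c in (0, 1, 2, 5)]
# Step 3: work outwards - fix the inner loop, vary the outer loop.
outer_sweep = [grid_cells(r, 2) for r in (0, 1, 2, 5)]
```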
Black box testing is a type of software testing in which the internal structure
of the software is not known to the tester. The testing is done without internal
knowledge of the product.
Black box testing can be done in the following ways:

1. Syntax-Driven Testing – This type of testing is applied to systems that can
be syntactically represented by some language – for example, compilers, or
languages that can be represented by a context-free grammar. In this, the test
cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work
similarly so instead of giving all of them separately we can group them and test
only one input of each group. The idea is to partition the input domain of the
system into several equivalence classes such that each member of the class
works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition any input domain into a
minimum of two sets: valid values and invalid values. For example, if
the valid range is 0 to 100, then select one valid input like 49 and one
invalid input like 104.
2. Generating test cases – (i) To each valid and invalid class of input,
assign a unique identification number. (ii) Write test cases covering all
valid and invalid classes, considering that no two invalid inputs mask
each other. To calculate the square root of a number, the equivalence
classes will be:
(a) Valid inputs:
• A whole number which is a perfect square – the output will be
an integer.
• A whole number which is not a perfect square – the output will
be a decimal number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers like “a”, “!”, “;”, etc.
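One representative input per equivalence class is enough; here is a sketch for the square-root example (the function sqrt_or_error is invented to stand in for the system under test):

```python
import math

# Stand-in for the system under test: returns a square root for valid
# inputs and "error" for the invalid classes.
def sqrt_or_error(value):
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        return "error"            # invalid class: non-numeric input
    if value < 0:
        return "error"            # invalid class: negative numbers
    return math.sqrt(value)

# One representative input per equivalence class.
representatives = {
    "perfect square": 49,         # integer-valued result
    "non perfect square": 50,     # decimal result
    "positive decimal": 2.25,
    "negative": -4,
    "character": "a",
}
results = {name: sqrt_or_error(v) for name, v in representatives.items()}
```

If one representative of a class fails, the technique predicts the other members of that class would fail the same way.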

3. Boundary value analysis – Boundaries are very good places for errors to
occur. Hence, if test cases are designed for the boundary values of the input
domain, then the efficiency of testing improves and the probability of finding
errors also increases. For example, if the valid range is 10 to 100, then test for
10 and 100 in addition to other valid and invalid inputs.
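A sketch of the boundary cases for a 10 to 100 valid range (the predicate is illustrative): test just outside, on, and just inside each boundary.

```python
# Illustrative validity check for the range 10..100 inclusive.
def in_valid_range(x, low=10, high=100):
    return low <= x <= high

# Boundary-value inputs: just below, on, and just above each boundary.
boundary_inputs = [9, 10, 11, 99, 100, 101]
observed = [in_valid_range(x) for x in boundary_inputs]
```

An off-by-one error (e.g. writing `<` instead of `<=`) would be caught by the 10 or 100 case, which interior values like 49 would miss.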

4. Cause-effect graphing – This technique establishes a relationship between
logical inputs, called causes, and the corresponding actions, called effects. The
causes and effects are represented using Boolean graphs. The following steps
are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.

(Figure: an example cause-effect graph and the decision table derived from it.)
Each column of the decision table corresponds to a rule, which becomes a test
case. So there will be 4 test cases.
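Since the original graph and table are not reproduced in these notes, here is an invented two-cause decision table and the test cases derived from its rules:

```python
# Hypothetical decision table: each rule (column) maps a combination
# of causes to an expected effect, and becomes one test case.
rules = [
    ((True,  True),  "effect_A"),
    ((True,  False), "effect_B"),
    ((False, True),  "effect_B"),
    ((False, False), "no_effect"),
]

# Stand-in for the system under test.
def system_under_test(cause1, cause2):
    if cause1 and cause2:
        return "effect_A"
    if cause1 or cause2:
        return "effect_B"
    return "no_effect"

# Executing the derived test cases: one per rule, four in total.
verdicts = [system_under_test(*causes) == effect for causes, effect in rules]
```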

5. Requirement-based testing – It includes validating the requirements given in
the SRS of a software system.
6. Compatibility testing – The test case results depend not only on the product
but also on the infrastructure delivering the functionality. When the
infrastructure parameters are changed, the software is still expected to work
properly. Some parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).

Black Box Testing Types
The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It validates the software system against its functional
requirements.

Regression Testing: It ensures that the newly added code is compatible with
the existing code. In other words, a new software update has no impact on the
functionality of the software. This is carried out after a system maintenance
operation and upgrades.

Nonfunctional Testing: Nonfunctional testing is also known as NFT. It does
not test the functional aspects of the software; instead, it focuses on the
software’s performance, usability, and scalability.

Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.

What can be identified by Black Box Testing:
1. Discovers missing functions, incorrect functions & interface errors
2. Discovers the errors faced in accessing the database
3. Discovers the errors that occur while initiating & terminating any
functions
4. Discovers the errors in performance or behaviour of software

Features of black box testing:
1. Independent testing: Black box testing is performed by testers who
are not involved in the development of the application, which helps to
are not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted
from the perspective of an end user, which helps to ensure that the
application meets user requirements and is easy to use.
3. No knowledge of internal code: Testers performing black box testing
do not have access to the application’s internal code, which allows
them to focus on testing the application’s external behavior and
functionality.
4. Requirements-based testing: Black box testing is typically based on
the application’s requirements, which helps to ensure that the
application meets the required specifications.
5. Different testing techniques: Black box testing can be performed
using various testing techniques, such as functional testing, usability
testing, acceptance testing, and regression testing.
6. Easy to automate: Black box testing is easy to automate using
various automation tools, which helps to reduce the overall testing
time and effort.
7. Scalability: Black box testing can be scaled up or down depending on
the size and complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box
testing have limited knowledge of the application being tested, which
helps to ensure that testing is more representative of how the end
users will interact with the application.
Advantages of Black Box Testing:
• The tester does not need to have more functional knowledge or
programming skills to implement the Black Box Testing.
• It is efficient for implementing tests in larger systems.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used in finding the ambiguity and contradictions in the functional
specifications.
Disadvantages of Black Box Testing:
• There is a possibility of repeating the same tests while implementing
the testing process.
• Without clear functional specifications, test cases are difficult to
implement.
• It is difficult to execute the test cases because of complex inputs at
different stages of testing.
• Sometimes, the reason for the test failure cannot be detected.
• Some programs in the application are not tested.
• It does not reveal the errors in the control structure.
• Working with a large sample space of inputs can be exhaustive and
consumes a lot of time.

Software Configuration Management (SCM)
Whenever software is built, there is always scope for improvement, and those
improvements bring changes into the picture. Changes may be required to
modify or update any existing solution or to create a new solution for a
problem. Requirements keep changing on a daily basis, so we need to keep
upgrading our systems based on the current requirements and needs to meet the
desired outputs. Changes should be analyzed before they are made to the
existing system, recorded before they are implemented, reported to have details
of before and after, and controlled in a manner that will improve quality and
reduce error. This is where the need for Software Configuration Management
comes in.
Software Configuration Management (SCM) is a set of activities
that controls change by identifying the items subject to change, establishing
relationships between those items, creating/defining mechanisms for
managing different versions, controlling the changes being implemented in the
current system, and auditing and reporting on the changes made. It
is essential to control the changes because, if the changes are not
checked properly, they may end up undermining well-running
software. In this way, SCM is a fundamental part of all project
management activities. Processes involved in SCM – Configuration
management provides a disciplined environment for smooth control of work
products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration
items from products that compose baselines at given points in time (a
baseline is a set of mutually consistent Configuration Items, which
has been formally reviewed and agreed upon, and serves as the basis
of further development). Establishing relationship among items,
creating a mechanism to manage multiple level of control and
procedure for change management system.
2. Version control – Creating versions/specifications of the existing
product to build new products with the help of the SCM system. A
description of versioning is given below:
Suppose after some changes, the version of a configuration object
changes from 1.0 to 1.1. Minor corrections and changes result in
versions 1.1.1 and 1.1.2, which is followed by a major update that is
object 1.2. The development of object 1.0 continues through 1.3 and
1.4, but finally, a noteworthy change to the object results in a new
evolutionary path, version 2.0. Both versions are currently supported.
3. Change control – Controlling changes to Configuration Items (CIs).
The change control process works as follows:
A change request (CR) is submitted and evaluated to assess
technical merit, potential side effects, overall impact on other
configuration objects and system functions, and the projected cost of
the change. The results of the evaluation are presented as a change
report, which is used by a change control board (CCB) —a person or
group who makes a final decision on the status and priority of the
change. An Engineering Change Request (ECR) is generated for each
approved change. The CCB also notifies the developer, with the proper
reason, if the change is rejected. The ECR describes the change to be
made, the constraints that must be respected, and the criteria for
review and audit. The object to be changed is “checked out” of the
project database, the change is made, and then the object is tested
again. The object is then “checked in” to the database and appropriate
version control mechanisms are used to create the next version of the
software.
4. Configuration auditing – A software configuration audit
complements the formal technical review of the process and product.
It focuses on the technical correctness of the configuration object that
has been modified. The audit confirms the completeness, correctness
and consistency of items in the SCM system and tracks action items
from the audit to closure.
5. Reporting – Providing accurate status and current configuration data
to developers, testers, end users, customers and stakeholders through
admin guides, user guides, FAQs, release notes, memos, installation
guides, configuration guides, etc.
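The version evolution described in the version control activity above can be sketched as a small tree (the representation is illustrative; real SCM tools keep this history in their repository):

```python
# Evolution of a configuration object: 1.0 evolves through 1.1..1.4,
# 1.1 receives minor corrections 1.1.1/1.1.2, and 2.0 starts a new
# evolutionary path. Children = versions derived from a version.
version_tree = {
    "1.0": ["1.1"],
    "1.1": ["1.1.1", "1.1.2", "1.2"],
    "1.2": ["1.3"],
    "1.3": ["1.4", "2.0"],
}

def descendants(tree, root):
    """All versions reachable from `root`, root included."""
    found = [root]
    for child in tree.get(root, []):
        found.extend(descendants(tree, child))
    return found
```

Walking the tree from any version recovers its whole line of descent, which is what lets both 1.x and 2.0 be supported at once.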
System Configuration Management (SCM) is a software engineering practice
that focuses on managing the configuration of software systems and ensuring
that software components are properly controlled, tracked, and stored. It is a
critical aspect of software development, as it helps to ensure that changes
made to a software system are properly coordinated and that the system is
always in a known and stable state.

SCM involves a set of processes and tools that help to manage the different
components of a software system, including source code, documentation, and
other assets. It enables teams to track changes made to the software system,
identify when and why changes were made, and manage the integration of
these changes into the final product.

The key objectives of SCM are to:
1. Control the evolution of software systems: SCM helps to ensure that
changes to a software system are properly planned, tested, and
integrated into the final product.
2. Enable collaboration and coordination: SCM helps teams to
collaborate and coordinate their work, ensuring that changes are
properly integrated and that everyone is working from the same
version of the software system.
3. Provide version control: SCM provides version control for software
systems, enabling teams to manage and track different versions of
the system and to revert to earlier versions if necessary.
4. Facilitate replication and distribution: SCM helps to ensure that
software systems can be easily replicated and distributed to other
environments, such as test, production, and customer sites.
SCM is a critical component of software development, and effective
SCM practices can help to improve the quality and reliability of
software systems, as well as increase efficiency and reduce the risk of
errors.

The main advantages of SCM are:
1. Improved productivity and efficiency by reducing the time and effort
required to manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes are
properly tested and validated.
3. Increased collaboration and communication among team members by
providing a central repository for software artifacts.
4. Improved quality and stability of software systems by ensuring that
all changes are properly controlled and managed.

The main disadvantages of SCM are:
1. Increased complexity and overhead, particularly in large software
systems.
2. Difficulty in managing dependencies and ensuring that all changes
are properly integrated.
3. Potential for conflicts and delays, particularly in large development
teams with multiple contributors.

SCM repository
In computer software engineering, software configuration management (SCM)
is any kind of practice that tracks and provides control over changes to source
code. Software developers sometimes use revision control software to maintain
documentation and configuration files as well as source code. Revision control
may also track changes to configuration files.

As teams design, develop and deploy software, it is common for multiple
versions of the same software to be deployed in different sites and for the
software's developers to be working simultaneously on updates. Bugs or
features of the software are often only present in certain versions (because of
the fixing of some problems and the introduction of others as the program
develops). Therefore, for the purposes of locating and fixing bugs, it is vitally
important to be able to retrieve and run different versions of the software to
determine in which version(s) the problem occurs. It may also be necessary to
develop two versions of the software concurrently: for instance, one
version has bugs fixed but no new features (branch), while the other version is
where new features are worked on (trunk).

At the simplest level, developers could simply retain multiple copies of the
different versions of the program, and label them appropriately. This simple
approach has been used in many large software projects. While this method can
work, it is inefficient as many near-identical copies of the program have to be
maintained. This requires a lot of self-discipline on the part of developers and
often leads to mistakes. Since the code base is the same, it also requires granting
read-write-execute permission to a set of developers, and this adds the pressure
of someone managing permissions so that the code base is not compromised,
which adds more complexity. Consequently, systems to automate some or all of
the revision control process have been developed. This ensures that the majority
of management of version control steps is hidden behind the scenes.

Moreover, in software development, legal and business practice, and other
environments, it has become increasingly common for a single document or
snippet of code to be edited by a team, the members of which may be
geographically dispersed and may pursue different and even contrary interests.
Sophisticated revision control that tracks and accounts for ownership of changes
to documents and code may be extremely helpful or even indispensable in such
situations.
SCM Process
The SCM process uses tools to ensure that each necessary change has been
implemented correctly in the appropriate component. It defines a number of
tasks:
o Identification of objects in the software configuration
o Version Control
o Change Control
o Configuration Audit
o Status Reporting
Identification
Basic Object: A unit of text created by a software engineer during analysis,
design, coding, or testing.
Aggregate Object: A collection of basic objects and other aggregate
objects; a design specification is an aggregate object.
Each object has a set of distinct characteristics that identify it uniquely: a name,
a description, a list of resources, and a "realization."
The interrelationships between configuration objects can be described with
a Module Interconnection Language (MIL).
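The identification scheme above can be sketched as a data structure. This is a hypothetical representation: the class and field names are assumptions chosen to mirror the four identifying characteristics listed (name, description, resources, realization).

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of configuration-object identification: each object
# carries a name, a description, a list of resources, and a "realization"
# (a pointer to the artifact itself, e.g. a file path).
@dataclass
class BasicObject:
    name: str
    description: str
    resources: List[str] = field(default_factory=list)
    realization: str = ""

@dataclass
class AggregateObject:
    name: str
    parts: List[object] = field(default_factory=list)  # basic or aggregate objects

sort_mod = BasicObject("sort.c", "sorting module", ["list.h"], "src/sort.c")
design_spec = AggregateObject("DesignSpec", [sort_mod])
print(design_spec.name, [p.name for p in design_spec.parts])
```

Because aggregate objects can contain other aggregates, the structure forms a tree, which is what a Module Interconnection Language describes more formally.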
Version Control
Version Control combines procedures and tools to handle the different
versions of configuration objects that are generated during the software
process.
Clemm defines version control in the context of SCM: Configuration
management allows a user to specify the alternative configuration of the
software system through the selection of appropriate versions. This is supported
by associating attributes with each software version, and then allowing a
configuration to be specified [and constructed] by describing the set of desired
attributes.
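Clemm's attribute-based selection can be illustrated with a small sketch. The attribute names (`platform`, `status`) and version records are invented for the example.

```python
# Illustrative sketch of Clemm's idea: each software version carries
# attributes, and a configuration is specified by describing the set of
# attributes the selected versions must have.
versions = [
    {"object": "parser", "version": "1.0", "platform": "linux",   "status": "released"},
    {"object": "parser", "version": "1.1", "platform": "linux",   "status": "beta"},
    {"object": "parser", "version": "1.0", "platform": "windows", "status": "released"},
]

def select(versions, **desired):
    """Return the versions whose attributes match every desired attribute."""
    return [v for v in versions
            if all(v.get(k) == val for k, val in desired.items())]

config = select(versions, platform="linux", status="released")
print(config)   # only parser 1.0 for linux satisfies both attributes
```

Specifying a configuration thus becomes a query over attributes rather than a hand-maintained list of version numbers.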
Change Control
James Bach describes change control in the context of SCM as follows: Change
control is vital, but the forces that make it essential also make it annoying.
We worry about change because a small confusion in the code can create a big
failure in the product. But it can also fix a significant failure or enable incredible
new capabilities.
We worry about change because a single rogue developer could sink the
project, yet brilliant ideas originate in the minds of those rogues, and a
burdensome change control process could effectively discourage them from
doing creative work.
A change request is submitted and evaluated to assess its technical merit,
potential side effects, overall impact on other configuration objects and
system functions, and the projected cost of the change.
The results of the evaluations are presented as a change report, which is used
by a change control authority (CCA) - a person or a group who makes a final
decision on the status and priority of the change.
The "check-in" and "check-out" process implements two necessary elements of
change control: access control and synchronization control.
Access Control governs which software engineers have the authority to access
and modify a particular configuration object.
Synchronization Control helps to ensure that parallel changes, performed by
two different people, don't overwrite one another.
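A minimal sketch of how check-out and check-in might enforce both controls follows; the `ConfigObject` API is hypothetical, invented to illustrate the two mechanisms.

```python
# Sketch: access control decides WHO may modify an object; synchronization
# control (a lock acquired at check-out) ensures parallel changes by two
# people do not overwrite one another.
class ConfigObject:
    def __init__(self, name, authorized):
        self.name = name
        self.authorized = set(authorized)   # engineers allowed to modify
        self.locked_by = None               # current lock holder, if any

    def check_out(self, engineer):
        if engineer not in self.authorized:             # access control
            raise PermissionError(f"{engineer} may not modify {self.name}")
        if self.locked_by is not None:                  # synchronization control
            raise RuntimeError(f"{self.name} is checked out by {self.locked_by}")
        self.locked_by = engineer

    def check_in(self, engineer):
        if self.locked_by != engineer:
            raise RuntimeError("only the lock holder can check in")
        self.locked_by = None

obj = ConfigObject("auth-module", authorized={"alice", "bob"})
obj.check_out("alice")        # alice now holds the lock
try:
    obj.check_out("bob")      # bob must wait until alice checks in
except RuntimeError as e:
    print(e)
obj.check_in("alice")         # lock released; bob may now check out
```

Real systems often replace the strict lock with optimistic merging, but the two responsibilities (authorization and conflict prevention) remain the same.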
Configuration Audit
SCM audits verify that the software product satisfies the baseline
requirements and that what is built matches what is delivered.
SCM audits also ensure that traceability is maintained between all CIs and
that all work requests are associated with one or more CI modifications.
SCM audits are the "watchdogs" that ensure the integrity of the project's
scope is preserved.
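Such a traceability check can be sketched as follows; the CI and work-request identifiers are made up for illustration.

```python
# Illustrative audit: every configuration item (CI) that changed must trace
# to at least one work request, and every work request must name a real CI.
cis = {"CI-1", "CI-2", "CI-3"}
work_requests = {
    "WR-101": ["CI-1"],
    "WR-102": ["CI-2", "CI-3"],
}

def audit(cis, work_requests):
    referenced = {ci for mods in work_requests.values() for ci in mods}
    untraced = cis - referenced   # CIs modified with no authorizing request
    dangling = referenced - cis   # requests naming unknown CIs
    return untraced, dangling

untraced, dangling = audit(cis, work_requests)
print(untraced, dangling)         # both empty: traceability holds
```

A non-empty result in either direction is exactly the kind of integrity violation the audit exists to catch.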
Status Reporting
Configuration Status Reporting (sometimes also called status accounting)
provides accurate status and current configuration data to developers,
testers, end users, customers, and stakeholders through admin guides, user
guides, FAQs, release notes, installation guides, configuration guides, etc.
Configuration Management for Web and Mobile Apps
A configuration management system is a software component responsible for
managing application configurations. These configurations are used by
applications in the ecosystem to carry out their responsibilities. Simply put,
the raison d'être of any configuration management system is to store
configurations and serve them to other services.
At a high level, the following diagram represents a configuration management
system.
Multiple applications in the software ecosystem communicate with the
configuration system to retrieve configurations. The mode of communication
varies from system to system. Some systems may expose RESTful APIs, others
may use specific protocols for full-duplex communication, while a few even
use messaging systems to broadcast configuration change events. It is fairly
common to find configuration management systems developed in house in
organizations. So, before you go hunting for an open-source solution, do find out
if your organization has one in place already.
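At its simplest, the store-and-retrieve capability might look like the sketch below. The `ConfigStore` API is an assumption for illustration, not any particular product's interface.

```python
# Minimal sketch of a configuration management system's core duty:
# storing configurations and serving them back to client services.
class ConfigStore:
    def __init__(self):
        self._store = {}                      # (service, key) -> value

    def put(self, service, key, value):
        self._store[(service, key)] = value

    def get(self, service, key, default=None):
        return self._store.get((service, key), default)

store = ConfigStore()
store.put("billing", "db_url", "postgres://db/billing")
store.put("billing", "retries", 3)
print(store.get("billing", "db_url"))
```

A production system would wrap this behind a network API (REST, a custom protocol, or a message bus, as noted above) and add versioning, access control, and change notification.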
Configuration systems should be singular across environments.
All environments (such as DEV, QA, Stage, UAT, Prod) should use the same
deployment of configuration management system to store their configurations.
Teams should stay away from environment-specific deployments of
configuration management systems, to avoid the overhead of promoting
configurations from one environment to another. A single deployment also
brings down infrastructure cost.
The goals of SCM are generally:
• Configuration identification – Identifying configurations, configuration
items and baselines.
• Configuration control – Implementing a controlled change process. This
is usually achieved by setting up a change control board whose primary
function is to approve or reject all change requests that are sent against
any baseline.
• Configuration status accounting – Recording and reporting all the
necessary information on the status of the development process.
• Configuration auditing – Ensuring that configurations contain all their
intended parts and are sound with respect to their specifying documents,
including requirements, architectural specifications and user manuals.
• Build management – Managing the process and tools used for builds.
• Process management – Ensuring adherence to the organization’s
development process.
• Environment management – Managing the software and hardware that
host the system.
• Teamwork – Facilitating team interactions related to the process.
• Defect tracking – Making sure every defect has ‘traceability’ back to the
source.
