Advanced Software Engg Notes 1-5
COURSE OBJECTIVES:
To understand the rationale for software development process models;
To understand why the architectural design of software is important;
To understand the five important dimensions of dependability, namely availability, reliability,
safety, security, and resilience;
To understand the basic notions of a web service, web service standards, and service-oriented
architecture;
To understand the different stages of testing, starting from testing during the development of a
software system.
A generic software process framework comprises five activities:
1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment
The name 'prescriptive' is given because the model prescribes a set of activities,
actions, tasks, quality assurance points, and change control mechanisms for every project.
• The incremental model combines elements of the waterfall model, applied in
an iterative fashion.
• The first increment in this model is generally a core product.
• Each increment builds the product and submits it to the customer for any
suggested modifications.
• The next increment implements the customer's suggestions and adds
additional requirements to the previous increment.
• This process is repeated until the product is finished.
For example, word-processing software is often developed using the
incremental model.
3. RAD model
1. Business Modeling
• Business modeling consists of modeling the flow of information between the various
business functions in the project.
• For example, what type of information is produced by each function and
which functions handle that information.
• A complete business analysis should be performed to get the essential
business information.
2. Data modeling
• The information defined in the business modeling phase is refined into the set of
data objects that are essential to the business.
• The attributes of each object are identified, and the relationships
between objects are defined.
3. Process modeling
• The data objects defined in the data modeling phase are transformed to achieve
the information flow needed to implement the business model.
• Process descriptions are created for adding, modifying, deleting or
retrieving a data object.
4. Application generation
• In the application generation phase, the actual system is built.
• Automated tools are used to construct the software.
5. Testing and turnover
• The prototypes are independently tested after each iteration so that the
overall testing time is reduced.
• The data flow and the interfaces between all the components are fully
tested. Hence, most of the programming components are already tested.
• The aim of an agile process is to deliver a working model of the software quickly
to the customer. For example, Extreme Programming (XP) is the best-known
agile process.
Agile Principles
Scrum
Scrum is one of the most widely known and followed agile methods, and the Scrum
process is usually followed in a small team. It focuses mainly on managing the process in a
development team. It uses a scrum board and divides the whole development
process into sprints. While the duration of a sprint may vary from 2 weeks
to 1 month, each sprint involves analysis, development, and user
acceptance.
Scrum also focuses on team interaction as well as effective client involvement.
There are different roles that people play in a scrum team, such as −
Scrum Master − The Scrum Master's responsibility is to set up meetings, form the
team, and manage the overall sprint-wise development. He/she also takes care of
any hurdle faced during the development process.
Product owner − The product owner is responsible for creating and maintaining
the backlog, which is a repository or dashboard containing all the development
plans or requirements for the particular sprint. The product owner also assigns
and manages the requirements based on their priority order in each sprint.
Scrum team − The development team is responsible for the development or
completion of the tasks and the successful execution of each sprint.
For each sprint, the product backlog is prepared by the product owner; it
comprises the requirements, tasks, and user stories for that particular sprint.
Once the tasks are selected for that sprint backlog, the development team works
on those tasks and delivers them as a product increment, sprint by sprint. The scrum
master manages the whole sprint process and takes care of the smooth execution of
the plan.
Every day there is a short meeting of around 15 minutes, called the daily Scrum or
stand-up meeting, where the scrum master gets a status update on the
product backlog and tries to find out if there is any blockage for further action.
Some non-functional testing types include load testing, compatibility testing,
usability testing, scalability testing and so on.
Advantages:
1. Encourages teamwork and collaboration.
2. Provides a flexible and adaptive framework for planning and managing
software development projects.
3. Helps to identify and fix problems early on by using frequent testing and
inspection.
Disadvantages:
1. A lack of understanding of Scrum methodologies can lead to confusion and
inefficiency.
2. It can be difficult to estimate the overall time and cost of a project, as the
process is iterative and changes are made throughout the development.
XP
Kanban
DevOps
DevOps is a combination of two words, one is Development and the other
is Operations. It is a culture that promotes the development and operations
processes collectively.
What is DevOps?
DevOps is a combination of two words, one is software Development, and the
second is Operations. It allows a single team to handle the entire application
lifecycle, from development to testing, deployment, and operations. DevOps
helps you to reduce the disconnection between software developers, quality
assurance (QA) engineers, and system administrators.
• Promotes collaboration between Development and Operations team to
deploy code to production faster in an automated & repeatable way.
• Helps to increase organization speed to deliver applications and services.
It also allows organizations to serve their customers better and compete
more strongly in the market.
• Defined as a sequence of development and IT operations with better
communication and collaboration.
• One of the most valuable business disciplines for enterprises or
organizations. With the help of DevOps, quality, and speed of the
application delivery has improved to a great extent.
• A practice or methodology of making "Developers" and "Operations"
folks work together. DevOps represents a change in the IT culture with a
complete focus on rapid IT service delivery through the adoption of agile
practices in the context of a system-oriented approach.
DevOps is all about the integration of the operations and development processes.
Organizations that have adopted DevOps have reported a 22% improvement in software
quality, a 17% improvement in application deployment frequency, and a 22% increase
in customer satisfaction, as well as a 19% increase in revenue as a result of
successful DevOps implementation.
Why DevOps?
o Previously, the operations and development teams worked in complete isolation.
o After the design-build phase, testing and deployment were performed
  sequentially; that is why they consumed more time than the actual build cycles.
o Without the use of DevOps, the team members are spending a large
amount of time on designing, testing, and deploying instead of building the
project.
o Manual code deployment leads to human errors in production.
o Coding and operations teams have their separate timelines and are not in
sync, causing further delays.
Prototype Construction
Prototype Evaluation
A prototype is the first step on your way to a successful digital product. You’ve
already spent a lot of time and money on your idea, so you shouldn’t risk
developing a product that doesn’t match your users’ needs or preferences, or
that your users won’t understand. That’s why you should invest in usability
testing and UX testing.
In a Prototype Evaluation, you get the chance to ask your target users for initial
feedback on design, usability, and user experience. In this phase, the prototype
is tested to see if it meets the requirements and specifications. This is done by
evaluating its functionality, performance, and reliability.
During the user testing process, the testers will check your prototype and make
sure you don’t work further on a product with usability problems or that’s not
suited to your users’ needs. Based on the feedback of your target users you can
then adapt your first draft and move on with development.
Advantages of Prototype Evaluation
• Make sure your future product is suited to your users’ needs and
expectations
• Prevent operational blindness by involving your target group in
development
• Get feedback on design, usability, and UX at an early stage
• Only invest time and money in features your users will use
Prototype Evolution
The prototype developed is then presented to the customer and the other
important stakeholders in the project. The feedback is collected in an organized
manner and used for further enhancements in the product under development.
The feedback and the review comments are discussed during this stage and some
negotiations happen with the customer based on factors like – time and budget
constraints and technical feasibility of the actual implementation. The changes
accepted are again incorporated in the new Prototype developed and the cycle
repeats until the customer expectations are met.
Based on the results of the testing and evaluation phase, the prototype is refined
and improvements are made. After the refinement phase, the final prototype is
created. This prototype is then tested and evaluated to ensure that it meets all
the requirements before it is mass-produced.
Evolutionary Prototype
In an evolutionary prototype, a system or product is built via several iterations,
with each iteration building on the one before it to enhance and improve the
design. The objective is to provide a final design that satisfies the expectations
and requirements of the intended audience.
This method can be used for a wide variety of projects in various industries, but
it is frequently utilized in software development and engineering.
Modeling
Modeling is a central part of all activities that lead up to the deployment of good
software. It is required to build quality software. Modeling is widely used in
science and engineering to provide abstractions of a system at some level of
precision and detail. The model is then analyzed in order to obtain a better
understanding of the system being developed. “Modeling” is the designing of
software applications before coding.
In model-based software design and development, software modeling is used as
an essential part of the software development process. Models are built and
analyzed prior to the implementation of the system and are used to direct the
subsequent implementation.
A better understanding of a system can be obtained by considering it from
different perspectives, such as requirements models, static models, and dynamic
models of the software system. A graphical modeling language such as UML helps
in developing, understanding, and communicating the different views.
Importance of Modeling:
• Modeling gives a graphical representation of the system to be built.
• Modeling contributes to a successful software organization.
• Modeling is a proven and well-accepted engineering technique.
• Modeling is not just a part of the building industry. It would be
inconceivable to deploy a new aircraft or an automobile without first
building models, from computer models to physical wind tunnel models to
full-scale prototypes.
• A model is a simplification of reality. A model provides the blueprint of a
system.
• A model may be structural, emphasizing the organization of the system,
or it may be behavioral, emphasizing the dynamics of the system.
• Models are built for a better understanding of the system that we are
developing:
a. Models help us to visualize a system as it is or as we want it to be.
b. Models permit us to specify the structure or behavior of a system.
c. Models give us a template that guides us in constructing a system.
d. Models support the decisions we have made.
Principles
There are several basic principles of a good software engineering approach that
are commonly followed by software developers and engineers to produce high-
quality software. Some of these principles include:
1. Modularity: Breaking down the software into smaller, independent, and
reusable components or modules. This makes the software easier to
understand, test, and maintain.
2. Abstraction: Hiding the implementation details of a module or component
and exposing only the necessary information. This makes the software
more flexible and easier to change.
3. Encapsulation: Wrapping the data and functions of a module or
component into a single unit, and providing controlled access to that unit.
This helps to protect the data and functions from unauthorized access and
modification.
4. DRY principle (Don’t Repeat Yourself): Avoiding duplication of code
and data in the software. This makes the software more maintainable and
less error-prone.
5. KISS principle (Keep It Simple, Stupid): Keeping the software design
and implementation as simple as possible. This makes the software more
understandable, testable, and maintainable.
6. YAGNI (You Ain’t Gonna Need It): Avoiding adding unnecessary
features or functionality to the software. This helps to keep the software
focused on the essential requirements and makes it more maintainable.
7. SOLID principles: A set of principles that guide the design of software to
make it more maintainable, reusable, and extensible. This includes the
Single Responsibility Principle, Open/Closed Principle, Liskov Substitution
Principle, Interface Segregation Principle, and Dependency Inversion
Principle.
8. Test-driven development: Writing automated tests before writing the
code, and ensuring that the code passes all tests before it is considered
complete. This helps to ensure that the software meets the requirements
and specifications.
By following these principles, software engineers can develop software that is more
reliable, maintainable, and extensible. It is also important to note that these
principles are not mutually exclusive and often work together to improve the overall
quality of the software. A short code sketch illustrating a few of these principles is
given below.
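The following is a minimal, hypothetical Python sketch illustrating encapsulation, the Single Responsibility Principle, and the DRY principle. The names Invoice and InvoiceRepository are invented for this example; they are not part of any real system described in these notes.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Invoice:
    """Holds invoice data only (Single Responsibility: data, no persistence)."""
    customer: str
    amounts: List[float] = field(default_factory=list)

    def total(self) -> float:
        # One place to compute the total (DRY: callers never re-implement this).
        return sum(self.amounts)


class InvoiceRepository:
    """Stores invoices (Single Responsibility: persistence, no business logic)."""

    def __init__(self) -> None:
        self._storage: Dict[str, Invoice] = {}   # encapsulated: callers use save()/find() only

    def save(self, invoice_id: str, invoice: Invoice) -> None:
        self._storage[invoice_id] = invoice

    def find(self, invoice_id: str) -> Invoice:
        return self._storage[invoice_id]


if __name__ == "__main__":
    repo = InvoiceRepository()
    repo.save("INV-1", Invoice("Alice", [100.0, 25.5]))
    print(repo.find("INV-1").total())   # 125.5
```

Because each class has a single reason to change, a new storage mechanism or a new total calculation can be introduced without touching the other class.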
Requirement Engineering
Requirements engineering (RE) refers to the process of defining, documenting,
and maintaining requirements in the engineering design process. Requirement
engineering provides the appropriate mechanism to understand what the
customer desires, analyzing the need, and assessing feasibility, negotiating a
reasonable solution, specifying the solution clearly, validating the specifications
and managing the requirements as they are transformed into a working system.
Thus, requirement engineering is the disciplined application of proven principles,
methods, tools, and notation to describe a proposed system's intended behavior
and its associated constraints.
Scenario-based Modelling
Requirements for a computer-based system can be seen in many different ways.
Some software people argue that it’s worth using a number of different modes of
representation while others believe that it’s best to select one mode of
representation. The specific elements of the requirements model are
dedicated to the analysis modeling method that is to be used.
Scenario-Based Elements
Using a scenario-based approach, system is described from user’s point of
view. For example, basic use cases and their corresponding use-case diagrams
evolve into more elaborate template-based use cases. Figure 1(a) depicts a UML
activity diagram for eliciting requirements and representing them using use
cases. There are three levels of elaboration.
Class-Based Elements
A collection of things that have similar attributes and common behaviors i.e.,
objects are categorized into classes. For example, a UML class diagram can be
used to depict a Sensor class for the SafeHome security function. Note that the
diagram lists the attributes of sensors and the operations that can be applied to modify
these attributes. In addition to class diagrams, other analysis modeling elements
depict the manner in which classes collaborate with one another and the relationships
and interactions between classes.
Class-based Modelling
Class-based modeling identifies classes, attributes and relationships that the
system will use. In the airline application example, the traveler/user and the
boarding pass represent classes. The traveler's first and last name, and travel
document type represent attributes, characteristics that describe the traveler
class. The relationship between traveler and boarding pass classes is that the
traveler must enter these details into the application in order to get the boarding
pass, and that the boarding pass contains this information along with other
details like the flight departure gate, seat number etc.
Class-based modeling represents the objects that the system will manipulate and the
operations that will be applied to them.
The elements of the class-based model consist of classes and objects, attributes,
operations, and class-responsibility-collaborator (CRC) models.
Classes
Classes are determined by underlining each noun or noun clause and entering it
into a simple table.
Classes are found in the following forms:
External entities: Systems, people or devices that generate the information
that is used by the computer-based system.
Things: Reports, displays, letters and signals that are part of the information
domain of the problem.
Occurrences or events: A property transfer or the completion of a series of
robot movements that occurs in the context of the system operation.
Roles: People such as a manager, engineer or salesperson who interact with the
system.
Organizational units: A division, group or team that is relevant to an application.
Places: The manufacturing floor or loading dock that establish the context of the
problem and the overall function of the system.
Structures: Sensors and computers that define a class of objects or related
classes of objects.
Attributes
Attributes are the set of data items that completely define a class within
the context of the problem.
For example, 'employee' is a class, and the name, ID, department,
designation and salary of the employee are its attributes.
Operations
The operations define the behavior of an object.
The operations are characterized into the following types (a small class sketch combining
attributes and operations is given after this list):
• The operations manipulate data, e.g. adding, modifying, deleting and
displaying it.
• The operations perform a computation.
• The operations monitor an object for the occurrence of a controlling
event.
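As a small illustration of classes, attributes and operations, the following minimal Python sketch models the 'employee' class mentioned above. The operation names raise_salary, annual_salary and display are assumptions made for this example.

```python
class Employee:
    """Class: Employee. Attributes: name, emp_id, department, designation, salary."""

    def __init__(self, name, emp_id, department, designation, salary):
        self.name = name
        self.emp_id = emp_id
        self.department = department
        self.designation = designation
        self.salary = salary

    # Operation that manipulates data (modifies an attribute).
    def raise_salary(self, percent):
        self.salary += self.salary * percent / 100

    # Operation that performs a computation.
    def annual_salary(self):
        return self.salary * 12

    # Operation that displays data.
    def display(self):
        print(f"{self.emp_id}: {self.name}, {self.designation}, {self.department}")


emp = Employee("Asha", "E101", "QA", "Engineer", 50000)
emp.raise_salary(10)
emp.display()
print(emp.annual_salary())
```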
CRC Modeling
• CRC stands for Class-Responsibility-Collaborator.
• It provides a simple method for identifying and organizing the classes that
are applicable to the system or product requirement.
• Class is an object-oriented class name. It consists of information about sub-
classes and super-classes.
• Responsibilities are the attributes and operations that are related to the
class.
• Collaborations identify how a class achieves each of its responsibilities.
If a class cannot fulfil a responsibility on its own, then it needs to
interact with another class.
Functional Modelling
Functional Modelling gives the process perspective of the object-oriented analysis
model and an overview of what the system is supposed to do. It defines the
function of the internal processes in the system with the aid of Data Flow
Diagrams (DFDs). It depicts the functional derivation of the data values without
indicating how they are derived when they are computed, or why they need to
be computed.
Data Flow Diagrams
Functional Modelling is represented through a hierarchy of DFDs. The DFD is a
graphical representation of a system that shows the inputs to the system, the
processing upon the inputs, the outputs of the system as well as the internal data
stores. DFDs illustrate the series of transformations or computations performed
on the objects or the system, and the external controls and objects that affect
the transformation.
Rumbaugh et al. have defined DFD as, “A data flow diagram is a graph which
shows the flow of data values from their sources in objects through processes
that transform them to their destinations on other objects.”
The four main parts of a DFD are −
• Processes - Processes are the computational activities that transform data
values. A whole system can be visualized as a high-level process. A process
may be further divided into smaller components. The lowest-level process
may be a simple function.
• Data Flows - Data flow represents the flow of data between two processes.
It could be between an actor and a process, or between a data store and
a process. A data flow denotes the value of a data item at some point of
the computation. This value is not changed by the data flow.
• Actors - Actors are the active objects that interact with the system by either
producing data and inputting them to the system, or consuming data
produced by the system. In other words, actors serve as the sources and
the sinks of data.
• Data Stores - Data stores are the passive objects that act as a repository
of data. Unlike actors, they cannot perform any operations. They are used
to store data and retrieve the stored data. They represent a data structure,
a disk file, or a table in a database.
The other parts of a DFD are −
• Constraints - Constraints specify the conditions or restrictions that need to
be satisfied over time. They allow adding new rules or modifying existing
ones.
• Control Flows – A process may be associated with a certain Boolean value
and is evaluated only if the value is true, though it is not a direct input to
the process. These Boolean values are called the control flows.
Behavioural Modelling
Behavioural Modelling indicates how software will respond to external events or
stimuli. In behavioral model, the behavior of the system is represented as a
function of specific events and time.
To create behavioral model following things can be considered:
• Evaluation of all use-cases to fully understand the sequence of interaction
within the system.
• Identification of events that drive the interaction sequence and understanding
how these events relate to specific classes.
• Creating a sequence diagram for each use case.
• Building a state diagram for the system.
• Reviewing the behavioral model to verify accuracy and consistency.
It describes interactions between objects. It shows how individual objects
collaborate to achieve the behavior of the system as a whole. In UML behavior of
a system is shown with the help of use case diagrams, sequence diagrams and
activity diagrams.
• A use case focuses on the functionality of a system, i.e., what a system
does for users. It shows the interaction between the system and outside
actors. Ex: student and librarian are actors; 'issue book' is a use case.
• A sequence diagram shows the objects that interact and the time sequence
of their interactions. Ex: student and librarian are objects; the student enquires
for a book and the librarian checks its availability, ordered with respect to time.
• An activity diagram specifies the important processing steps and the operations
required for processing; it does not show objects. Ex: issue book, check
availability. A small state-machine sketch of this library example is given below.
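To make the idea of a state diagram concrete, here is a minimal, hypothetical Python sketch of the 'issue book' behaviour from the library example. The states and events used (AVAILABLE, ISSUED, issue, return) are assumptions for illustration only.

```python
class Book:
    """A tiny state machine: a book responds to 'issue' and 'return' events."""

    AVAILABLE = "AVAILABLE"
    ISSUED = "ISSUED"

    def __init__(self, title):
        self.title = title
        self.state = Book.AVAILABLE   # initial state

    def issue(self):
        if self.state != Book.AVAILABLE:
            raise ValueError(f"{self.title} is not available")
        self.state = Book.ISSUED      # transition AVAILABLE -> ISSUED

    def return_book(self):
        if self.state != Book.ISSUED:
            raise ValueError(f"{self.title} was not issued")
        self.state = Book.AVAILABLE   # transition ISSUED -> AVAILABLE


book = Book("Software Engineering")
book.issue()
print(book.state)        # ISSUED
book.return_book()
print(book.state)        # AVAILABLE
```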
UNIT II SOFTWARE DESIGN
Design Concepts
The software design phase is the first step in the SDLC (Software Development
Life Cycle) that shifts the focus from the problem domain to the solution domain.
In software design, the system is viewed as a collection of components or
modules with clearly defined behaviors and bounds.
Design Model
A design model in software engineering is an object-based picture or pictures that
represent the use cases for a system. Or to put it another way, it's the means to
describe a system's implementation and source code in a diagrammatic
fashion. Software modeling should address the entire software design including
interfaces, interactions with other software, and all the software methods.
Software models are ways of expressing a software design. Usually, some sort of
abstract language or pictures are used to express the software design. For object-
oriented software, an object modeling language such as UML is used to develop
and express the software design. There are several tools that you can use to
develop your UML design.
In almost all cases a modeling language is used to develop the design not just to
capture the design after it is complete. This allows the designer to try different
designs and decide which will be best for the final solution. Think of designing
your software as you would a house. You start by drawing a rough sketch of the
floor plan and layout of the rooms and floors. The drawing is your modeling
language and the resulting blueprint will be a model of your final design. You will
continue to modify your drawings until you arrive at a design that meets all your
requirements. Only then should you start cutting boards or writing code.
Again, the benefit of designing your software using a modeling language is that
you discover problems early and fix them without refactoring your code.
The design model builds on the analysis model by describing, in greater detail,
the structure of the system and how the system will be implemented. Classes
that were identified in the analysis model are refined to include the
implementation constructs.
The design model is based on the analysis and architectural requirements of the
system. It represents the application components and determines their
appropriate placement and use within the overall architecture.
In the design model, packages contain the design elements of the system, such
as design classes, interfaces, and design subsystems, that evolve from the
analysis classes. Each package can contain any number of subpackages that
further partition the contained design elements. These architectural layers form
the basis for a second-level organization of the elements that describe the
specifications and implementation details of the system.
Within each package, sequence diagrams illustrate how the objects in the classes
interact, state machine diagrams model the dynamic behavior of classes,
component diagrams describe the software architecture of the system, and
deployment diagrams describe the physical architecture of the system.
Software Architecture
Software Architecture defines fundamental organization of a system and more
simply defines a structured solution. It defines how components of a software
system are assembled, their relationship and communication between them. It
serves as a blueprint for software application and development basis for
developer team.
Software architecture defines a number of things that make many aspects of
the software development process easier.
• A software architecture defines structure of a system.
• A software architecture defines behavior of a system.
Besides these, software architecture is also important for many other factors
like the quality, reliability, maintainability, supportability and performance of the
software, and so on.
Architectural Styles
The software needs the architectural design to represent the design of the
software. IEEE defines architectural design as “the process of defining a collection
of hardware and software components and their interfaces to establish the
framework for the development of a computer system.” The software that is built
for computer-based systems can exhibit one of these many architectural styles.
Each style will describe a system category that consists of:
The use of architectural styles is to establish a structure for all the components
of the system.
Architectural styles provide several benefits. The most important of these benefits
is that they provide a common language.
Architectural Design
1. It defines an abstraction level at which the designers can specify the functional
and performance behavior of the system.
4. It develops and documents top-level design for the external and internal
interfaces.
6. It defines and documents preliminary test requirements and the schedule for
software integration.
Component-Level Design
Functional Independence
Coupling
Coupling measures the degree of interdependence among the modules. Several
factors like interface complexity, type of data that pass across the interface, type
of communication, number of interfaces per module, etc. influence the strength
of coupling between two modules. For a better interface and a well-structured
system, the modules should be loosely coupled in order to minimize the ‘ripple
effect’, in which modifications in one module result in errors in other modules.
Module coupling is categorized into the following types; a short code sketch
contrasting data coupling and control coupling is given after the list.
1. No direct coupling: Two modules are said to be ‘no direct coupled’ if they
are independent of each other.
2. Data coupling: Two modules are said to be ‘data coupled’ if they use
parameter list to pass data items for communication.
3. Stamp coupling: Two modules are said to be ‘stamp coupled’ if they
communicate by passing a data structure that stores additional information
than what is required to perform their functions.
4. Control coupling: Two modules are said to be ‘control coupled’ if they
communicate (pass a piece of information intended to control the internal
logic) using at least one ‘control flag’. The control flag is a variable whose
value is used by the dependent modules to make decisions.
5. Content coupling: Two modules are said to be ‘content coupled’ if one
module modifies data of some other module or one module is under the
control of another module or one module branches into the middle of
another module.
6. Common coupling: Two modules are said to be ‘common coupled’ if they
both reference a common data block.
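The contrast between data coupling and control coupling can be seen in a short, hypothetical Python sketch; the function names below are invented for illustration.

```python
# Data coupling: the caller passes only the data the module needs.
def compute_net_salary(gross_salary, tax_rate):
    return gross_salary * (1 - tax_rate)


# Control coupling: the caller passes a flag that steers the module's internal logic.
def format_report(data, as_html):
    if as_html:                      # control flag decides the behaviour
        return "<ul>" + "".join(f"<li>{d}</li>" for d in data) + "</ul>"
    return "\n".join(str(d) for d in data)


print(compute_net_salary(50000, 0.2))           # data-coupled call
print(format_report(["a", "b"], as_html=True))  # control-coupled call
```

The first function only depends on the values it is given, while the second depends on a caller-supplied decision about its internal logic, which makes it harder to change independently.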
Cohesion
“User Experience Design” is often used interchangeably with terms such as “User
Interface Design” and “Usability.” However, while usability and user interface (UI)
design are important aspects of UX design, they are subsets of it.
User interface is the front-end application view to which user interacts in order to
use the software. The software becomes more popular if its user interface is:
• Attractive
• Simple to use
• Responsive in short time
• Clear to understand
• Consistent on all interface screens
Golden Rules:
The following are the golden rules stated by Theo Mandel that must be followed
during the design of the interface. Place the user in control:
• Define the interaction modes in such a way that does not force the user
into unnecessary or undesired actions: The user should be able to easily
enter and exit the mode with little or no effort.
• Provide for flexible interaction: Different people will use different
interaction mechanisms, some might use keyboard commands, some might
use mouse, some might use touch screen, etc, Hence all interaction
mechanisms should be provided.
• Allow user interaction to be interruptible and undoable: When a user is
doing a sequence of actions the user must be able to interrupt the sequence
to do some other work without losing the work that had been done. The
user should also be able to do undo operation.
• Streamline interaction as skill level advances and allow the interaction to
be customized: Advanced or highly skilled user should be provided a chance
to customize the interface as user wants which allows different interaction
mechanisms so that user doesn’t feel bored while using the same
interaction mechanism.
• Hide technical internals from casual users: The user should not be aware
of the internal technical details of the system. He should interact with the
interface just to do his work.
• Design for direct interaction with objects that appear on screen: The user
should be able to use the objects and manipulate the objects that are
present on the screen to perform a necessary task. By this, the user feels
easy to control over the screen.
• Allow the user to put the current task into a meaningful context: Many
interfaces have dozens of screens, so it is important to provide indicators
consistently so that the user knows the context of the work being done. The user
should also know from which page he/she has navigated to the current page and
where he/she can navigate from the current page.
• Maintain consistency across a family of applications: The development of
some set of applications all should follow and implement the same design,
rules so that consistency is maintained among applications.
• If past interactive models have created user expectations do not make
changes unless there is a compelling reason.
There are several key principles that software engineers should follow
when designing user interfaces:
There are three general classes of mobile code systems: remote evaluation, code-
on-demand and mobile agent.
1. Remote evaluation: A component on the source host has the know-how but
not the resources needed for performing a service. The component is transferred
to the destination host, where it is executed using the available resource. The
result is returned to the source host.
2. Code-on-Demand:
Here the needed resources are available locally, but the know-how is not known.
The local sub-system thus requests the component providing the know-how from
the appropriate remote host. Code-on-demand requires the same steps as
remote evaluation; the only difference is that the roles of the source and destination
hosts are reversed.
3. Mobile agent:
If a component on a given host has the know-how for providing some service,
has some execution state, and has access to some of the resources needed to
provide that service, it may migrate, along with its state and local resources, to the
destination host, which may have the remaining resources needed for providing the
service. From a software architectural perspective, mobile agents are stateful
software components.
There are some factors which have to be taken care of while migrating code;
some architectural concerns of which engineers must be aware are:
Pattern-Based Design
Design patterns can be broken down into three types, organized by their intent
into creational design patterns, structural design patterns, and behavioral design
patterns.
A creational design pattern deals with object creation and initialization, providing
guidance about which objects are created for a given situation. These design
patterns are used to increase flexibility and to reuse existing code. A short sketch of
the Factory Method pattern is given after the list below.
• Factory Method: Creates objects with a common interface and lets a class
defer instantiation to subclasses.
• Abstract Factory: Creates a family of related objects.
• Builder: A step-by-step pattern for creating complex objects, separating
construction and representation.
• Prototype: Supports the copying of existing objects without code
becoming dependent on classes.
• Singleton: Restricts object creation for a class to only one instance.
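As a hedged sketch of the Factory Method idea, the following Python example lets subclasses decide which concrete object to create. The Shape, Circle and Square names are assumptions invented for this illustration.

```python
from abc import ABC, abstractmethod


class Shape(ABC):
    @abstractmethod
    def draw(self) -> str: ...


class Circle(Shape):
    def draw(self) -> str:
        return "circle"


class Square(Shape):
    def draw(self) -> str:
        return "square"


class ShapeCreator(ABC):
    """Declares the factory method; subclasses decide which Shape to create."""

    @abstractmethod
    def create_shape(self) -> Shape: ...

    def render(self) -> str:
        # Client code works against the common Shape interface only.
        return f"drawing a {self.create_shape().draw()}"


class CircleCreator(ShapeCreator):
    def create_shape(self) -> Shape:
        return Circle()


class SquareCreator(ShapeCreator):
    def create_shape(self) -> Shape:
        return Square()


for creator in (CircleCreator(), SquareCreator()):
    print(creator.render())
```

New shapes can be added by writing a new creator subclass, without changing the client code that calls render().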
A structural design pattern deals with class and object composition, or how to
assemble objects and classes into larger structures.
5. Lower the Size of the Codebase - Each pattern helps software developers
change how the system works without a full redesign. Further, as the “optimal”
solution, the design pattern often requires less code.
UNIT III SYSTEM DEPENDABILITY AND SECURITY
Dependable Systems
•Availability: the system and services are mostly available, with very little or
no down time.
•Safety: the systems do not pose unacceptable risks to the environment or the
health of users.
•Confidentiality: data and other information should not be divulged without intent
and authorization.
•Integrity: System data should not be modified without intent and authorization.
These attributes have some overlap among themselves. For example, dependability,
just like security, is a weakest-link phenomenon, in that the strength of the whole is
determined by the weakest link in the chain. Thus, for a product or system to be
considered dependable, it should possess all the aforementioned attributes.
Conversely, a system is undependable to the degree that it lacks one or more of
these dependability attributes. In most cases, dependability is also not a binary
phenomenon (present or absent) but based on gradations and acceptable
thresholds. These thresholds are specific to infrastructures such as electronic,
electromechanical, and quantum, as well as applications, such as
communications, process control, and data processing.
One of the keys for dependable systems is that they should be empirically
verifiable in terms of their dependability. That means that fashionable or trendy
methodologies that may be very popular need to be objectively assessed on the
basis of their true effectiveness. One of the measures for dependability is the
number of faults. Faults are errors in design or implementation that
cause failures. A failure is deemed to have occurred if any of the functional
specifications of the system are not met. Failures can range from minor to
catastrophic, depending upon the impact of failure on the system and the
immediate environment. Minor failures are referred to as errors. The underlying
faults may thus be prioritized, based on their potential impact. Lack of
dependability means that the system is undependable due to shortcomings in one
or more of the dependability attributes, caused by faults in the system that are
potential causes of system failure.
Faults can manifest themselves during the operation of a system. Such faults are
known as active. Otherwise, the faults may be present and possibly manifest
themselves in the future. Such faults are referred to as dormant, and the purpose
of the testing phase in systems engineering is to discover as many dormant and
active faults as possible before deployment and general use of the tested system.
Dependability Properties
Principal properties:
• Availability: The probability that the system will be up and running and
able to deliver useful services to users.
• Reliability: The probability that the system will correctly deliver services
as expected by users.
• Safety: A judgment of how likely it is that the system will cause damage
to people or its environment.
• Security: A judgment of how likely it is that the system can resist
accidental or deliberate intrusions.
• Resilience: A judgment of how well a system can maintain the continuity
of its critical services in the presence of disruptive events such as
equipment failure and cyberattacks.
Socio-technical systems
Software engineering is not an isolated activity but is part of a broader systems
engineering process. Software systems are therefore not isolated systems but
are essential components of broader systems that have a human, social or
organizational purpose.
There are interactions and dependencies between the layers in a system and
changes at one level ripple through the other levels. For dependability, a
systems perspective is essential.
Dependable processes
Explicitly defined
A process that has a defined process model that is used to drive the software
production process. Data must be collected during the process that proves that
the development team has followed the process as defined in the process model.
Repeatable
A process that does not rely on individual interpretation and judgment. The
process can be repeated across projects and with different team members,
irrespective of who is involved in the development.
Formal Methods
1. Requirements engineering
2. Architecture design
3. Implementation
4. Testing
5. Maintenance
6. Evolution
Some may argue about whether all these steps usually take place, but they must, to some
extent, at least for usable software with a longer-term perspective on exploitation. Some
of the earlier steps – particularly the design stages – may bring a sense of uncertainty
in terms of unforeseen problems later in the process. The reasons could be:
B METHOD
Z NOTATION
EVENT-B
Evaluation
Before deciding on the use of formal methods, each architect must list the pros
and cons against resources available, as well as the system’s needs.
BENEFITS
Reliability Engineering
Objectives
The reason for the priority emphasis is that it is by far the most effective way of
working, in terms of minimizing costs and generating reliable products. The
primary skills that are required, therefore, are the ability to understand and
anticipate the possible causes of failures, and knowledge of how to prevent them.
It is also necessary to have knowledge of the methods that can be used for
analyzing designs and data.
What is Availability?
What is Reliability?
The formal definition of reliability does not always reflect the user's
perception of a system's reliability. Reliability can only be defined formally with
respect to a system specification i.e. a failure is a deviation from a
specification. Users don't read specifications and don't know how the system is
supposed to behave; therefore, perceived reliability is more important in practice.
Removing X% of the faults in a system will not necessarily improve the reliability
by X%. Program defects may be in rarely executed sections of the code so may
never be encountered by users. Removing these does not affect the perceived
reliability. Users adapt their behavior to avoid system features that may fail for
them. A program with known faults may therefore still be perceived as reliable
by its users.
Reliability Requirements
Fault-Tolerant Architectures
• Flight control systems, where system failure could threaten the safety of
passengers;
• Reactor systems where failure of a control system could lead to a chemical
or nuclear emergency;
• Telecommunication systems, where there is a need for 24/7 availability.
Program components should only be allowed access to data that they need for
their implementation. This means that accidental corruption of parts of the
program state by these components is impossible. You can control visibility by
using abstract data types where the data representation is private and you only
allow access to the data through predefined operations such as get() and put().
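A minimal Python sketch of this idea follows. The Buffer name and the fixed capacity are assumptions; the point is that the data representation stays private and clients can only use put() and get().

```python
class Buffer:
    """Abstract data type: the internal list is private; access is via put()/get()."""

    def __init__(self, capacity=10):
        self._items = []          # private representation, never touched by clients
        self._capacity = capacity

    def put(self, item):
        if len(self._items) >= self._capacity:
            raise OverflowError("buffer full")
        self._items.append(item)

    def get(self):
        if not self._items:
            raise IndexError("buffer empty")
        return self._items.pop(0)


buf = Buffer(capacity=2)
buf.put("reading-1")
print(buf.get())   # clients never index the underlying list directly
```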
All programs take inputs from their environment and make assumptions about
these inputs. However, program specifications rarely define what to do if an input
is not consistent with these assumptions. Consequently, many programs behave
unpredictably when presented with unusual inputs and, sometimes, these are
threats to the security of the system. Consequently, you should always check
inputs before processing against the assumptions made about these inputs.
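A small, hypothetical sketch of checking an input against its assumptions before processing it; the register_age name and the accepted age range are invented for this example.

```python
def register_age(raw_value):
    """Validate the input before using it, instead of assuming it is well formed."""
    try:
        age = int(raw_value)
    except (TypeError, ValueError):
        raise ValueError(f"age must be a whole number, got {raw_value!r}")
    if not 0 <= age <= 130:          # assumption made explicit and enforced
        raise ValueError(f"age out of expected range: {age}")
    return age


print(register_age("42"))      # 42
# register_age("forty-two")    # would raise ValueError instead of failing later
```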
Error-prone constructs:
For systems that involve long transactions or user interactions, you should always
provide a restart capability that allows the system to restart after failure without
users having to redo everything that they have done.
Always give constants that reflect real-world values (such as tax rates) names
rather than using their numeric values, and always refer to them by name. You are
less likely to make mistakes and type the wrong value when you are using a name
rather than a value. It also means that when these 'constants' change (for sure,
they are not really constant), then you only have to make the change in one place
in your program.
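For example (the tax rate value is purely illustrative):

```python
# Named constant: defined once, referred to by name everywhere.
SALES_TAX_RATE = 0.18

def price_with_tax(net_price):
    return net_price * (1 + SALES_TAX_RATE)   # no "magic number" repeated here

print(price_with_tax(100))   # 118.0; changing the rate means editing one line only
```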
Reliability Measurement
1. Product Metrics
Product metrics are those which describe the artifacts, i.e., requirement
specification documents, system design documents, etc. These metrics help in
assessing whether the product is good enough, through records on attributes like
usability, reliability, maintainability and portability. These measurements are
taken from the actual body of the source code.
3. Process Metrics
Process metrics quantify useful attributes of the software development process &
its environment. They tell if the process is functioning optimally as they report on
characteristics like cycle time & rework time. The goal of process metrics is to do
the job right the first time through the process. The quality of the product is
a direct function of the process. So process metrics can be used to estimate,
monitor, and improve the reliability and quality of software. Process metrics
describe the effectiveness and quality of the processes that produce the software
product.
Examples are:
To achieve this objective, the number of faults found during testing and the failures
or other problems which are reported by users after delivery are collected,
summarized, and analyzed. Failure metrics are based upon customer information
regarding faults found after release of the software. The failure data collected is
therefore used to calculate failure density, Mean Time between Failures
(MTBF), or other parameters to measure or predict software reliability.
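As a hedged illustration of these reliability parameters, the sketch below computes failure density and MTBF from a hypothetical failure log; all numbers are invented.

```python
# Hypothetical data: operating hours between successive failures, and code size.
hours_between_failures = [120.0, 95.5, 210.0, 60.25]   # invented values
kloc = 42.0                                            # thousands of lines of code

num_failures = len(hours_between_failures)
total_hours = sum(hours_between_failures)

mtbf = total_hours / num_failures            # Mean Time Between Failures (hours)
failure_density = num_failures / kloc        # failures per KLOC

print(f"MTBF: {mtbf:.1f} hours")
print(f"Failure density: {failure_density:.2f} failures/KLOC")
```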
Agile methods are not usually used for safety-critical systems engineering.
Extensive process and product documentation is needed for system regulation,
which contradicts the focus in agile methods on the software itself. A detailed
safety analysis of a complete system specification is important, which contradicts
the interleaved development of a system specification and program. However,
some agile techniques such as test-driven development may be used.
Static program analysis uses software tools for source text processing. They
parse the program text and try to discover potentially erroneous conditions and
bring these to the attention of the V & V team. They are very effective as an aid
to inspections - they are a supplement to but not a replacement for inspections.
The static analyzer can check for patterns in the code that are characteristic of
errors made by programmers using a particular language.
Users of a programming language define error patterns, thus extending the types
of error that can be detected. This allows specific rules that apply to a program
to be checked.
Assertion checking
Developers include formal assertions in their programs that describe relationships
which must hold. The static analyzer symbolically executes the code and highlights
potential problems.
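In code, such assertions state relationships that must hold at a given point, so that an analyzer (or the runtime) can flag violations. A minimal Python sketch, with an invented binary-search-style invariant:

```python
def find_index(sorted_values, target):
    """Binary search; the assertions document relationships that must always hold."""
    low, high = 0, len(sorted_values)
    while low < high:
        assert 0 <= low <= high <= len(sorted_values)   # loop invariant
        mid = (low + high) // 2
        if sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid
    assert low == high                                   # post-condition of the loop
    return low if low < len(sorted_values) and sorted_values[low] == target else -1


print(find_index([2, 4, 6, 8], 6))   # 2
print(find_index([2, 4, 6, 8], 5))   # -1
```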
Safety-critical systems
Safety terminology
Accident (mishap) − An unplanned event or sequence of events which results in human
death or injury, damage to property, or to the environment. An overdose of insulin is
an example of an accident.
Damage − A measure of the loss resulting from a mishap. Damage can range from many
people being killed as a result of an accident to minor injury or property damage.
Risk − This is a measure of the probability that the system will cause an accident.
The risk is assessed by considering the hazard probability, the hazard severity, and
the probability that the hazard will lead to an accident.
Hazard avoidance
The system is designed so that some classes of hazard simply cannot arise.
Damage limitation
The system includes protection features that minimize the damage that may
result from an accident.
Accidents in complex systems rarely have a single cause as these systems are
designed to be resilient to a single point of failure. Almost all accidents are a
result of combinations of malfunctions rather than single failures. It is probably
the case that anticipating all problem combinations, especially, in software-
controlled systems is impossible so achieving complete safety is impossible.
However, accidents are inevitable.
Safety Requirements
Hazard-driven analysis:
Hazard identification
Identify the hazards that may threaten the system. Hazard identification may be
based on different types of hazards: physical, electrical, biological, service
failure, etc.
Hazard assessment
The process is concerned with understanding the likelihood that a risk will
arise and the potential consequences if an accident or incident should occur.
Risks may be categorized as: intolerable (must never arise or result in an
accident), as low as reasonably practical - ALARP (must minimize the
possibility of risk given cost and schedule constraints), and acceptable (the
consequences of the risk are acceptable and no extra costs should be incurred to
reduce hazard probability).
Hazard assessment process: for each identified hazard, assess hazard probability,
accident severity, estimated risk, acceptability.
Hazard analysis
• Put the risk or hazard at the root of the tree and identify the system states
that could lead to that hazard.
• Where appropriate, link these with 'and' or 'or' conditions.
• A goal should be to minimize the number of single causes of system failure.
A small sketch of evaluating such a fault tree is given below.
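A fault tree can be thought of as a tree of events combined with 'and'/'or' gates. The hypothetical Python sketch below (the event names and the tree structure are invented, loosely echoing the insulin example earlier) evaluates whether the root hazard can arise from a given set of basic failures.

```python
def or_gate(*children):
    # The hazard above an OR gate arises if any child condition holds.
    return lambda failures: any(child(failures) for child in children)

def and_gate(*children):
    # The hazard above an AND gate arises only if all child conditions hold.
    return lambda failures: all(child(failures) for child in children)

def basic_event(name):
    return lambda failures: name in failures

# Root hazard: insulin overdose delivered (invented example structure).
overdose = or_gate(
    basic_event("incorrect dose computed"),
    and_gate(basic_event("sensor failure"), basic_event("alarm failure")),
)

print(overdose({"sensor failure"}))                    # False: AND gate not satisfied
print(overdose({"sensor failure", "alarm failure"}))   # True: both AND inputs present
```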
Risk reduction
Safety Cases
A safety case is a written proof that identifies the hazards and risks of a
manufactured product or installation. A safety case is a structured argument,
supported by evidence, intended to justify that a system is acceptably safe, and
when there is danger or damage to make it as low as reasonably possible
(ALARP). In industries like transportation and medicine, safety cases are
mandatory and legally binding. Safety cases tend to be presented in a document
of textual information and requirements accompanied by a graphical notation.
The most popular graphical notation is the Goal Structuring
Notation (GSN). Even though it is a requirement in the automotive standard ISO 26262,
the GSN notation is not some far-fetched, complex notation. It basically sets out the
goals, the strategies justifying the claims and evidence, and a solution to make that
goal safe.
The elements of the Goal Structuring Notation have a symbol plus a count, and
are shown inside a shape. They are as follows (N represents a number that is
incremented for each new element):
• A goal G(N), shown as a rectangle, sets up an objective or sub-objective of
the safety case.
• A strategy S(N), represented in a parallelogram, describes process or
inference between a goal and its supporting goal(s) and solutions.
• A solution Sn(N) shown inside a circle, demonstrates a reference or proof.
• A context C(N), shown like a square with curved edges. It defines the limits
that apply to the outlined structure.
• A justification J(N), rendered as an oval, shows a rational or logical
statement.
• An assumption A(N), also rendered as an oval, presents an intentionally
unsubstantiated statement.
So, considering how a safety case in the GSN notation is structured, any program
that can make sketches, like Microsoft Visio or a mind-mapping tool, could work. But
there is a tool specifically for this, called ASTAH-GSN. It has a student license, and
the tool has some major pluses:
Security Engineering
Dimensions of security:
Security terminology
Asset − Something of value which has to be protected. The asset may be the software
system itself or data used by that system.
Attack − An exploitation of a system's vulnerability. Generally, this is from outside
the system and is a deliberate attempt to cause some damage.
Control − A protective measure that reduces a system's vulnerability. Encryption is an
example of a control that reduces a vulnerability of a weak access control system.
Exposure − Possible loss or harm to a computing system. This can be loss or damage to
data, or can be a loss of time and effort if recovery is necessary after a security
breach.
Threat − Circumstances that have potential to cause loss or harm. You can think of
these as a system vulnerability that is subjected to an attack.
Vulnerability − A weakness in a computer-based system that may be exploited to cause
loss or harm.
Vulnerability avoidance
The system is designed so that vulnerabilities do not occur. For example, if there
is no external network connection then external attack is impossible.
If a system is attacked and the system or its data are corrupted as a consequence
of that attack, then this may induce system failures that compromise the
reliability of the system.
Resilience is a system characteristic that reflects its ability to resist and recover
from damaging events. The most probable damaging event on networked
software systems is a cyberattack of some kind so most of the work now done in
resilience is aimed at deterring, detecting and recovering from such attacks.
Security policies should set out general information access strategies that
should apply across the organization. The point of security policies is to inform
everyone in an organization about security so these should not be long and
detailed technical documents. From a security engineering perspective, the
security policy defines, in broad terms, the security goals of the organization. The
security engineering process is concerned with implementing these goals.
For sensitive personal information, a high level of security is required; for other
information, the consequences of loss may be minor so a lower level of security
is adequate.
The security policy should set out what is expected of users e.g. strong
passwords, log out of computers, office security, etc.
Existing security procedures and technologies that should be maintained
For reasons of practicality and cost, it may be essential to continue to use existing
approaches to security even where these have known limitations.
The aim of this initial risk assessment is to identify generic risks that are
applicable to the system and to decide if an adequate level of security can be
achieved at a reasonable cost. The risk assessment should focus on the
identification and analysis of high-level risks to the system. The outcomes of the
risk assessment process are used to help identify security requirements.
This risk assessment takes place during the system development life cycle and is
informed by the technical system design and implementation decisions. The
results of the assessment may lead to changes to the security requirements and
the addition of new requirements. Known and potential vulnerabilities are
identified, and this knowledge is used to inform decision making about the system
functionality and how it is to be implemented, tested, and deployed.
This risk assessment process focuses on the use of the system and the possible
risks that can arise from human behavior. Operational risk assessment should
continue after a system has been installed to take account of how the system is
used. Organizational changes may mean that the system is used in different ways
from those originally planned. These changes lead to new security requirements
that have to be implemented as the system evolves.
Security Requirements
• Risk avoidance requirements set out the risks that should be avoided by
designing the system so that these risks simply cannot arise.
• Risk detection requirements define mechanisms that identify the risk if it
arises and neutralize the risk before losses occur.
• Risk mitigation requirements set out how the system should be designed
so that it can recover from and restore system assets after some loss has
occurred.
• Asset identification: identify the key system assets (or services) that
have to be protected.
• Asset value assessment: estimate the value of the identified assets.
• Exposure assessment: assess the potential losses associated with each
asset.
• Threat identification: identify the most probable threats to the system
assets.
• Attack assessment: decompose threats into possible attacks on the
system and the ways that these may occur.
• Control identification: propose the controls that may be put in place to
protect an asset.
• Feasibility assessment: assess the technical feasibility and cost of the
controls.
• Security requirements definition: define system security requirements.
These can be infrastructure or application system requirements.
Distributing assets means that attacks on one system do not necessarily lead to
complete loss of system service. Each platform has separate protection features
and may be different from other platforms so that they do not share a common
vulnerability. Distribution is particularly important if the risk of denial-of-service
attacks is high.
These are potentially conflicting. If assets are distributed, then they are more
expensive to protect. If assets are protected, then usability and performance
requirements may be compromised.
The main goal of Security Testing is to identify the threats in the system and
measure its potential vulnerabilities, so the threats can be encountered and the
system does not stop functioning or cannot be exploited. It also helps in detecting
all possible security risks in the system and helps developers to fix the problems
through coding.
There are seven main types of security testing as per the Open Source Security
Testing Methodology Manual. They are explained as follows:
• Vulnerability Scanning: This is done through automated software to scan
a system against known vulnerability signatures.
• Security Scanning: It involves identifying network and system
weaknesses, and later provides solutions for reducing these risks. This
scanning can be performed for both Manual and Automated scanning.
• Penetration testing: This kind of testing simulates an attack from a
malicious hacker. This testing involves analysis of a particular system to
check for potential vulnerabilities to an external hacking attempt.
• Risk Assessment: This testing involves analysis of security risks observed
in the organization. Risks are classified as Low, Medium and High. This
testing recommends controls and measures to reduce the risk.
• Security Auditing: This is an internal inspection of applications and operating systems for security flaws. An audit can also be done via line-by-line inspection of code.
• Ethical hacking: This is authorized hacking of an organization's software systems. Unlike malicious hackers, who attack for their own gain, the intent is to expose security flaws in the system.
• Posture Assessment: This combines Security scanning, Ethical Hacking
and Risk Assessments to show an overall security posture of an
organization.
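As a toy illustration of vulnerability scanning against known signatures, the sketch below checks a list of installed component versions against a small table of known-vulnerable versions. Both the component names and the "signature" table are invented for the example.

# Toy vulnerability scan: compare installed components against known-bad versions.
# Component names and vulnerable versions are hypothetical.

KNOWN_VULNERABLE = {
    "examplelib": {"1.0.2", "1.0.3"},      # hypothetical vulnerable releases
    "demo-webserver": {"2.4.0"},
}

installed = {"examplelib": "1.0.3", "demo-webserver": "2.5.1", "otherlib": "0.9"}

def scan(components):
    """Return the (name, version) pairs that match a known vulnerability signature."""
    return [(name, version)
            for name, version in components.items()
            if version in KNOWN_VULNERABLE.get(name, set())]

for name, version in scan(installed):
    print(f"VULNERABLE: {name} {version}")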
It is generally agreed that the cost of fixing security problems is higher if security testing is postponed until after the implementation phase or after deployment. It is therefore necessary to involve security testing in the earlier phases of the SDLC.
Let’s look into the corresponding Security processes to be adopted for every
phase in SDLC
SDLC Phase and corresponding Security Processes:
Requirements: Security analysis of requirements and checking of abuse/misuse cases
Design: Security risk analysis for the design; development of a test plan including security tests
Coding and Unit Testing: Static and dynamic testing and security white box testing
Integration Testing: Black box testing
System Testing: Black box testing and vulnerability scanning
Implementation: Penetration testing, vulnerability scanning
Support: Impact analysis of patches
In security testing, different methodologies are followed, and they are as follows:
• Tiger Box: This testing is usually done on a laptop loaded with a collection of operating systems and hacking tools. It helps penetration testers and security testers to conduct vulnerability assessments and attacks.
• Black Box: The tester is given no prior knowledge of the network topology or the technology used, and tests the system from an external attacker's perspective.
• Grey Box: Partial information about the system is given to the tester; it is a hybrid of the white box and black box models.
The benefits of SSA extend from the companies that develop software to
the end users of that software. When procuring a third-party application, SSA
assures that you’re getting code built from the ground up with security in mind.
For software-led businesses that sell software to other companies or users, SSA
increases trust in your code. And, when coding custom web applications in-
house for your own company’s use, SSA can significantly reduce the likelihood of
breaches or compromises from basic security mistakes.
It’s important not to confuse the concept of SSA with the popular idea of shifting security to the left. Shifting left mainly focuses on moving security checks and tests to earlier phases of the development cycle.
Software security assurance also differs from quality assurance in that the latter
is about ensuring software engineering processes meet defined policies and
standards, usually through testing. Security assurance, on the other hand, is
all about ensuring that software conforms to its security requirements
and doesn’t include any functionality that could compromise security.
Resilience Engineering
The resilience of a system is a judgment of how well that system can maintain
the continuity of its critical services in the presence of disruptive events, such
as equipment failure and cyberattacks. This view encompasses three ideas:
• Some of the events that can affect a system cannot be avoided, so the system must be able to cope with them when they arise.
• The focus is on the system's critical services, whose loss would have serious consequences, rather than on all of its services.
• Service may be degraded for a time after a disruptive event, provided the critical services can be restored quickly.
Four related resilience activities are involved in the detection of and recovery from system problems:
• Recognition: detecting that an adverse event, such as an attack or an equipment failure, has occurred or is in progress.
• Resistance: resisting the event so that critical services can continue to be delivered.
• Recovery: restoring the critical services that have been affected by the event.
• Reinstatement: returning the system as a whole to its normal operating state.
Cybersecurity
Cybercrime is the illegal use of networked systems and is one of the most
serious problems facing our society. Cybersecurity is a broader topic than system
security engineering. Cybersecurity is a socio-technical issue covering all
aspects of ensuring the protection of citizens, businesses, and critical
infrastructures from threats that arise from their use of computers and the
Internet. Cybersecurity is concerned with all of an organization's IT assets from
networks through to application systems.
Cybersecurity threats fall into three broad categories: threats to the confidentiality of assets, threats to the integrity of assets, and threats to the availability of assets.
Socio-technical Resilience
Resilience engineering is concerned with adverse external events that can lead to
system failure. To design a resilient system, you have to think about socio-
technical systems design and not exclusively focus on software. Dealing with
these events is often easier and more effective in the broader socio-technical
system.
Organizations should monitor both their internal operations and their external
environment for threats before they arise.
A resilient organization should not simply focus on its current operations but
should anticipate possible future events and changes that may affect its
operations and resilience.
People inevitably make mistakes (human errors) that sometimes lead to serious system failures. There are two ways to consider human error:
• The person approach, in which errors are regarded as the responsibility of the individual, and unsafe acts such as mistakes and omissions are seen as a consequence of individual carelessness or incompetence.
• The systems approach, which assumes that people are fallible and that errors are inevitable; errors are treated as a consequence of the conditions under which people work.
Systems engineers should assume that human errors will occur during system operation. To improve the resilience of a system, designers have to think about the defenses and barriers to human error that could be part of the system. These barriers may be built into the technical components of the system (technical barriers); if not, they may be part of the processes, procedures, and guidelines for using the system (socio-technical barriers).
Defensive layers have vulnerabilities: they are like slices of Swiss cheese
with holes in the layer corresponding to these vulnerabilities. Vulnerabilities are
dynamic: the 'holes' are not always in the same place and the size of the holes
may vary depending on the operating conditions. System failures occur when the
holes line up and all of the defenses fail.
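A small numerical sketch of the Swiss cheese idea: if each defensive layer fails (has a "hole") with some probability, and layer failures are assumed to be independent, the chance that all of the holes line up falls rapidly as layers are added. The per-layer probabilities below are illustrative assumptions.

# Swiss cheese model sketch: probability that every defensive layer fails at once,
# under the simplifying assumption that layer failures are independent.

from math import prod

layer_failure_probs = [0.1, 0.05, 0.2]   # hypothetical per-layer 'hole' probabilities

def prob_all_layers_fail(probs):
    """Probability that the holes in every layer line up (a system failure)."""
    return prod(probs)

print(prob_all_layers_fail(layer_failure_probs))          # 0.001 for the values above
print(prob_all_layers_fail(layer_failure_probs + [0.1]))  # a fourth layer lowers it further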
• Identifying critical services and assets that allow a system to fulfill its
primary purpose.
• Designing system components that support problem recognition,
resistance, recovery and reinstatement.
Service-oriented Architecture
Services might aggregate information and data retrieved from other services or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
Components of SOA:
Advantages of SOA:
Disadvantages of SOA:
RESTful Services
REST emerged as the predominant Web service design model within a few years of its introduction, measured by the number of Web services that use it. Owing to its more straightforward style, it has largely displaced SOAP- and WSDL-based interface design.
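A minimal sketch of the RESTful style: resources are identified by URIs and manipulated through the uniform HTTP interface (GET, PUT, POST, DELETE), with representations such as JSON. The endpoint URL below is a placeholder, not a real service, so the sketch only illustrates the shape of the interaction.

# Minimal RESTful client sketch using only the Python standard library.
# The resource URI is hypothetical; substitute a real service to run it.

import json
import urllib.request

BOOKING_URI = "https://api.example.com/bookings/42"   # hypothetical resource URI

def get_booking(uri=BOOKING_URI):
    """Retrieve a representation of the booking resource with HTTP GET."""
    with urllib.request.urlopen(uri) as response:
        return json.load(response)          # the representation is assumed to be JSON

def delete_booking(uri=BOOKING_URI):
    """Delete the booking resource using the uniform HTTP DELETE operation."""
    request = urllib.request.Request(uri, method="DELETE")
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    print(get_booking())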
RESTful Architecture:
Service Engineering
Service engineering is the process of developing services for reuse in service-
oriented applications. The service has to be designed as a reusable abstraction
that can be used in different systems. Generally useful functionality associated
with that abstraction must be designed and the service must be robust and
reliable. The service must be documented so that it can be discovered and
understood by potential users.
Service Composition
Existing services are composed and configured to create new composite services
and applications. The basis for service composition is often a workflow. Workflows
are logical sequences of activities that, together, model a coherent business
process. For example, a travel reservation service allows flight, car hire, and hotel bookings to be coordinated.
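A sketch of service composition as a workflow: three hypothetical services (flight, hotel, car hire) are invoked in sequence and their results combined into one composite travel-reservation result. The service functions below are stand-ins for real service calls, not part of any API described in the notes.

# Service composition sketch: a workflow that coordinates three hypothetical services.
# In a real service-oriented application, each function would be a remote service call.

def book_flight(destination, date):
    return {"flight": f"FL-123 to {destination} on {date}"}       # stub result

def book_hotel(destination, date, nights):
    return {"hotel": f"Hotel in {destination}, {nights} nights"}  # stub result

def hire_car(destination, date, days):
    return {"car": f"Car in {destination} for {days} days"}       # stub result

def travel_reservation(destination, date, nights):
    """Composite travel reservation service built by orchestrating the three services."""
    reservation = {}
    reservation.update(book_flight(destination, date))
    reservation.update(book_hotel(destination, date, nights))
    reservation.update(hire_car(destination, date, nights))
    return reservation

print(travel_reservation("Edinburgh", "2024-06-01", nights=3))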
Systems Engineering
Systems Engineering is an engineering field that takes an interdisciplinary
approach to product development. Systems engineers analyze the collection of
pieces to make sure when working together, they achieve the intended objectives
or purpose of the product. For example, in automotive development, a propulsion
system or braking system will involve mechanical engineers, electrical engineers,
and a host of other specialized engineering disciplines. A systems engineer will
focus on making each of the individual systems work together into an integrated
whole that performs as expected across the lifecycle of the product.
A systems engineer is tasked with looking at the entire integrated system and
evaluating it against its desired outcomes. In that role, the systems engineer
must know a little bit about everything and have an ability to see the “big picture.”
While specialists can focus on their specific disciplines, the systems engineer must
evaluate the complex system as a whole against the initial requirements and
desired outcomes.
Systems engineers have multi-faceted roles to play but primarily assist with:
• Design compatibility
• Definition of requirements
• Management of projects
• Cost analysis
• Scheduling
• Possible maintenance needs
• Ease of operations
• Future systems upgrades
• Communication among engineers, managers, suppliers, and customers in
regards to the system’s operations
The systems engineering process can take a top-down, bottom-up, or middle-out approach, depending on the system being developed. The process encompasses all creative, manual, and technical activities necessary to define the ultimate outcomes and to see that the development process results in a product that meets its objectives.
Socio-technical systems
Socio-technical systems are large-scale systems that do not just include software
and hardware but also people, processes and organizational policies. Socio-
technical systems are often 'systems of systems' i.e., are made up of a number
of independent systems. The boundaries of socio-technical system are subjective
rather than objective: different people see the system in different ways.
There are a number of key elements in an organization that may affect the requirements, design, and operation of a socio-technical system, and a new system may lead to changes in some or all of these elements. Socio-technical systems also have emergent properties, which are properties of the system as a whole rather than of its individual parts:
• Functional properties: These appear when all the parts of a system work
together to achieve some objective. For example, a bicycle has the
functional property of being a transportation device once it has been
assembled from its components.
• Non-functional emergent properties: Examples are reliability,
performance, safety, and security. These relate to the behavior of the
system in its operational environment. They are often critical for computer-
based systems as failure to achieve some minimal defined level in these
properties may make the system unusable.
Failures are not independent and they propagate from one level to another.
System reliability depends on the context in which the system is used. A system that is reliable in one environment may be less reliable in a different environment because the physical conditions (e.g., the temperature) and the mode of operation are different.
Conceptual design
Conceptual design investigates the feasibility of an idea and develops that idea
to create an overall vision of a system. Conceptual design precedes and overlaps
with requirements engineering. It may involve discussions with users and other stakeholders and the identification of critical requirements. The aim of conceptual
design is to create a high-level system description that communicates the system
purpose to non-technical decision makers.
System Procurement
Drivers that may influence the decision to procure a new system include:
• The state of other organizational systems and whether or not they need to
be replaced
• The need to comply with external regulations
• External competition
• Business re-organization
• Available budget
Systems and system components that are procured fall into three broad categories:
• Off-the-shelf applications that may be used without change and which need only minimal configuration for use.
• Configurable application or ERP systems that have to be modified or
adapted for use either by modifying the code or by using inbuilt
configuration features, such as process definitions and rules.
• Custom systems that have to be designed and implemented specially for
use.
System Development
System delivery and deployment takes place after completion, when the
system has to be installed in the customer's environment. A number of issues can
occur:
Operational processes are the processes involved in using the system for its
defined purpose. For new systems, these processes may have to be designed and
tested and operators trained in the use of the system. Operational processes
should be flexible to allow operators to cope with problems and periods of
fluctuating workload.
Problems with operation automation:
Large systems have a long lifetime. They must evolve to meet changing
requirements. Existing systems which must be maintained are sometimes called
legacy systems. Evolution is inherently costly for a number of reasons:
Computers are used to control a wide range of systems from simple domestic
machines, through games controllers, to entire manufacturing plants. Their
software must react to events generated by the hardware and, often, issue control
signals in response to these events. The software in these systems is
embedded in system hardware, often in read-only memory, and usually
responds, in real time, to events from the system's environment.
The design process for embedded systems is a system engineering process that
has to consider, in detail, the design and performance of the system hardware.
Part of the design process may involve deciding which system capabilities are to
be implemented in software and which in hardware. Low-level decisions on
hardware, support software and system timing must be considered early in the
process. These may mean that additional software functionality, such as battery
and power management, has to be included in the system.
Producer processes collect data and add it to the buffer. Consumer processes take data from the buffer, making buffer elements available for reuse. Producer and consumer processes must be mutually excluded from accessing the same element. The buffer must prevent producer processes from adding information to a full buffer and consumer processes from taking information from an empty buffer.
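A compact sketch of the bounded-buffer idea described above, using Python threads: a Condition object provides the mutual exclusion, producers block when the buffer is full, and consumers block when it is empty. The buffer size and data values are arbitrary.

# Bounded buffer sketch with mutual exclusion between producer and consumer processes.

import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, size):
        self.items = deque()
        self.size = size
        self.cond = threading.Condition()    # provides mutual exclusion and waiting

    def put(self, item):
        with self.cond:
            while len(self.items) == self.size:   # producer must not add to a full buffer
                self.cond.wait()
            self.items.append(item)
            self.cond.notify_all()

    def get(self):
        with self.cond:
            while not self.items:                 # consumer must not take from an empty buffer
                self.cond.wait()
            item = self.items.popleft()
            self.cond.notify_all()
            return item

buf = BoundedBuffer(size=4)
producer = threading.Thread(target=lambda: [buf.put(i) for i in range(10)])
consumer = threading.Thread(target=lambda: [print(buf.get()) for _ in range(10)])
producer.start(); consumer.start()
producer.join(); consumer.join()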
• Observe and React pattern is used when a set of sensors are routinely
monitored and displayed.
• Environmental Control pattern is used when a system includes sensors,
which provide information about the environment and actuators that can
change the environment.
• Process Pipeline pattern is used when data has to be transformed from
one representation to another before it can be processed.
The input values of a set of sensors of the same type are collected and analyzed.
These values are displayed in some way. If the sensor values indicate that some
exceptional condition has arisen, then actions are initiated to draw the operator's
attention to that value and, in certain cases, to take actions in response to the
exceptional value.
Stimuli - Values from sensors attached to the system and the state of the system
actuators.
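A small sketch of the Observe and React pattern: sensor values of the same type are polled, displayed, and compared against a threshold, and an exceptional value triggers an operator alarm. The sensor readings and the threshold are invented for the example.

# Observe and React sketch: monitor a set of temperature sensors and raise an alarm
# when any reading exceeds a threshold. Readings and threshold are illustrative.

ALARM_THRESHOLD = 80.0   # hypothetical limit in degrees Celsius

def read_sensors():
    """Stand-in for polling the real sensors; returns one reading per sensor."""
    return {"sensor-1": 72.5, "sensor-2": 85.1, "sensor-3": 69.0}

def raise_alarm(sensor, value):
    """Draw the operator's attention to the exceptional value."""
    print(f"ALARM: {sensor} reported {value:.1f}, above {ALARM_THRESHOLD}")

def observe_and_react():
    for sensor, value in read_sensors().items():
        print(f"{sensor}: {value:.1f}")           # display the monitored values
        if value > ALARM_THRESHOLD:               # exceptional condition detected
            raise_alarm(sensor, value)

observe_and_react()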
A pipeline of processes is set up with data moving in sequence from one end of
the pipeline to another. The processes are often linked by synchronized buffers
to allow the producer and consumer processes to run at different speeds. The
culmination of a pipeline may be display or data storage or the pipeline may
terminate in an actuator.
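A sketch of the Process Pipeline pattern using Python generators: data items move through a sequence of transformation stages, with each stage consuming the output of the previous one, and the final stage here simply stores the results. The particular transformations are arbitrary examples.

# Process Pipeline sketch: raw values flow through acquisition -> conversion -> filtering
# stages; the pipeline terminates in simple data storage. Transformations are illustrative.

def acquire(raw_values):
    """First stage: yield raw readings one at a time."""
    for value in raw_values:
        yield value

def convert(readings):
    """Second stage: transform each reading to another representation (here, volts)."""
    for value in readings:
        yield value * 0.01

def remove_outliers(values, limit=5.0):
    """Third stage: drop values outside the expected range."""
    for value in values:
        if value <= limit:
            yield value

raw = [120, 250, 800, 90]                      # hypothetical sensor samples
store = list(remove_outliers(convert(acquire(raw))))
print(store)                                   # [1.2, 2.5, 0.9]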
Timing Analysis
The correctness of a real-time system depends not just on the correctness of its
outputs but also on the time at which these outputs were produced. In a timing
analysis, you calculate how often each process in the system must be executed
to ensure that all inputs are processed and all system responses produced in a
timely way. The results of the timing analysis are used to decide how frequently
each process should execute and how these processes should be scheduled by
the real-time operating system.
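A small worked sketch of a timing (utilization) check for periodic processes: each process i has an execution time Ci and a period Ti, the total utilization is the sum of Ci/Ti, and, as a standard rate-monotonic scheduling result not stated in these notes, the process set is schedulable under rate-monotonic priorities if the utilization is below n(2^(1/n) - 1). The process parameters are illustrative.

# Timing analysis sketch: CPU utilization of periodic processes and the standard
# rate-monotonic schedulability bound. Execution times and periods are illustrative.

processes = [
    # (name, execution time Ci in ms, period Ti in ms)
    ("sensor polling", 5, 50),
    ("control law", 10, 100),
    ("display update", 20, 200),
]

def utilization(procs):
    """Total CPU utilization U = sum of Ci / Ti."""
    return sum(c / t for _, c, t in procs)

def rm_bound(n):
    """Rate-monotonic utilization bound n * (2**(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

u = utilization(processes)
bound = rm_bound(len(processes))
print(f"U = {u:.2f}, RM bound = {bound:.2f}, schedulable: {u <= bound}")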
Real-time operating systems are specialized operating systems that manage the processes in the real-time system. They are responsible for process management and resource (processor and memory) allocation. They may be based on a standard kernel, used unchanged or modified for a particular application, and they do not normally include facilities such as file management.
Scheduling strategies:
Unit Testing
Unit tests are automated and are run each time the code is changed to ensure
that new code does not break existing functionality. Unit tests are designed to
validate the smallest possible unit of code, such as a function or a method,
and test it in isolation from the rest of the system. This allows developers to
quickly identify and fix any issues early in the development process, improving
the overall quality of the software and reducing the time required for later
testing.
Unit testing is a software testing technique in which individual units of software (groups of program modules, usage procedures, and operating procedures) are tested to determine whether they are suitable for use. Each independent module is tested in isolation, usually by the developer, to determine whether it contains any defects, and the testing is concerned with the functional correctness of that module. An individual unit may be a single function or a procedure. Unit testing is carried out during the development of an application and, in the SDLC or V-model, it is the first level of testing, performed before integration testing. Although it is usually performed by developers, quality assurance engineers may also carry out unit testing when developers are reluctant to test their own code.
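A minimal unit test sketch using Python's standard unittest module: a small function (the "unit") is tested in isolation, and the tests can be re-run automatically every time the code changes. The function under test is a made-up example.

# Minimal unit test example with Python's built-in unittest framework.
# discount() is a hypothetical unit under test.

import unittest

def discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 150)

if __name__ == "__main__":
    unittest.main()    # re-run whenever the code changes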
Unit testing techniques are mainly categorized into three types: black box testing, which exercises the unit's interface together with its inputs and outputs; white box testing, which exercises the internal behaviour of the code; and gray box testing, which combines the two and is used to execute test suites, test methods, and test cases and to perform risk analysis.
Code coverage techniques used in Unit Testing are listed below:
• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Finite State Machine Coverage
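A small illustration of the difference between statement coverage and decision (branch) coverage on a tiny function: one test input executes every statement, but a second input is still needed to exercise the false branch of the decision. The function and inputs are illustrative.

# Statement coverage vs. branch (decision) coverage on a tiny example function.

def absolute(x):
    if x < 0:        # decision with a true branch and a false branch
        x = -x       # statement executed only on the true branch
    return x

# One test with a negative input executes every statement (100% statement coverage) ...
assert absolute(-3) == 3

# ... but branch coverage also requires the decision to evaluate to False,
# so a second test with a non-negative input is needed.
assert absolute(4) == 4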
Integration Testing
Integration Testing is defined as a type of testing where software modules
are integrated logically and tested as a group. A typical software project
consists of multiple software modules, coded by different programmers. The
purpose of this level of testing is to expose defects in the interaction between
these software modules when they are integrated
Integration Testing focuses on checking data communication amongst these
modules. Hence it is also termed as ‘I & T’ (Integration and Testing), ‘String
Testing’ and sometimes ‘Thread Testing’.
Although each software module is unit tested, defects still exist for various
reasons like
• A module, in general, is designed by an individual software developer whose understanding and programming logic may differ from that of other programmers. Integration testing becomes necessary to verify that the software modules work in unity.
• At the time of module development, client requirements often change. These new requirements may not have been unit tested, so system integration testing becomes necessary.
• Interfaces of the software modules with the database could be
erroneous
• External Hardware interfaces, if any, could be erroneous
• Inadequate exception handling could cause issues.
Test Case ID: 2
Test Case Objective: Check the interface link between the Mailbox and the Delete Mails module
Test Case Description: From the Mailbox, select an email and click the Delete button
Expected Result: The selected email should appear in the Deleted/Trash folder
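A sketch of how the integration test case above might be automated: simplified Mailbox and Trash modules are integrated, and the test checks that deleting a mail from the mailbox makes it appear in the Deleted/Trash folder. The module classes are simplified stand-ins, not real modules from the notes.

# Integration test sketch for the Mailbox / Delete Mails interface described above.
# Mailbox and Trash are simplified stand-ins for the real modules.

import unittest

class Trash:
    def __init__(self):
        self.mails = []

    def add(self, mail):
        self.mails.append(mail)

class Mailbox:
    def __init__(self, trash):
        self.trash = trash            # interface link between the two modules
        self.mails = ["welcome", "invoice"]

    def delete(self, mail):
        self.mails.remove(mail)
        self.trash.add(mail)          # deleted mail is handed over to the Trash module

class MailboxTrashIntegrationTest(unittest.TestCase):
    def test_deleted_mail_appears_in_trash(self):
        trash = Trash()
        mailbox = Mailbox(trash)
        mailbox.delete("invoice")
        self.assertIn("invoice", trash.mails)       # expected result from the test case table
        self.assertNotIn("invoice", mailbox.mails)

if __name__ == "__main__":
    unittest.main()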
Validation Testing
The process of evaluating software during the development process or at the
end of the development process to determine whether it satisfies specified
business requirements. Validation testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment. It answers the question: are we building the right product?
Verification:
Verification is the process of checking that a software achieves its goal without
any bugs. It is the process to ensure whether the product that is developed is
right or not. It verifies whether the developed product fulfills the requirements
that we have.
Verification is Static Testing.
Activities involved in verification:
1. Inspections
2. Reviews
3. Walkthroughs
4. Desk-checking
Validation:
Validation is the process of checking whether the software product is up to the mark, in other words, whether it meets the high-level requirements. It checks whether we are developing the right product by comparing the actual product against the expected product.
Validation is dynamic testing.
Activities involved in validation:
1. Black box testing
2. White box testing
3. Unit testing
4. Integration testing
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software/product
under test is evaluated during this type of testing.
Activities:
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
System Testing
Advantages of Debugging:
Disadvantages of Debugging:
White Box Testing
White box testing is carried out in two basic steps:
1. The tester needs to understand the source code so that test cases can be designed to ensure that all statements and paths are covered.
2. The tester should write some code for the test cases and execute them.
Features of white box testing:
1. Code coverage analysis: White box testing helps to analyze the code
coverage of an application, which helps to identify the areas of the code
that are not being tested.
2. Access to the source code: White box testing requires access to the
application’s source code, which makes it possible to test individual
functions, methods, and modules.
3. Knowledge of programming languages: Testers performing white
box testing must have knowledge of programming languages like Java,
C++, Python, and PHP to understand the code structure and write
tests.
4. Identifying logical errors: White box testing helps to identify logical
errors in the code, such as infinite loops or incorrect conditional
statements.
5. Integration testing: White box testing is useful for integration
testing, as it allows testers to verify that the different components of
an application are working together as expected.
6. Unit testing: White box testing is also used for unit testing, which
involves testing individual units of code to ensure that they are working
correctly.
7. Optimization of code: White box testing can help to optimize the
code by identifying any performance issues, redundant code, or other
areas that can be improved.
8. Security testing: White box testing can also be used for security
testing, as it allows testers to identify any vulnerabilities in the
application’s code.
Advantages:
1. White box testing is thorough as the entire code and structures are
tested.
2. It results in the optimization of code removing errors and helps in
removing extra lines of code.
3. It can start at an earlier stage as it doesn’t require any interface as in
the case of black box testing.
4. Easy to automate.
5. White box testing can be easily started in Software Development Life
Cycle.
6. Easy Code Optimization.
Some of the advantages of white box testing include:
• Testers can identify defects that cannot be detected through other
testing techniques.
• Testers can create more comprehensive and effective test cases that
cover all code paths.
• Testers can ensure that the code meets coding standards and is optimized for performance.
However, there are also some disadvantages to white box testing, such as:
• Testers need to have programming knowledge and access to the
source code to perform tests.
• Testers may focus too much on the internal workings of the software
and may miss external issues.
• Testers may have a biased view of the software since they are familiar
with its internal workings.
Overall, white box testing is an important technique in software
engineering, and it is useful for identifying defects and ensuring that
software applications meet their requirements and specifications at the code level.
Disadvantages:
1. It is very expensive.
2. Redesigning code and rewriting code needs test cases to be written
again.
3. Testers are required to have in-depth knowledge of the code and
programming language as opposed to black-box testing.
4. Missing functionalities cannot be detected as the code that exists is
tested.
5. Very complex and at times not realistic.
6. Much more chances of Errors in production.
Flow graph notations are defined for the basic control constructs: sequential statements, if-then-else, do-while, while-do, and switch-case.
Cyclomatic complexity V(G) can be computed using three formulae:
1. Formula based on edges and nodes:
V(G) = E - N + 2P
where E is the number of edges, N is the number of nodes, and P is the number of connected components (P = 1 for a single flow graph).
2. Formula based on decision (predicate) nodes:
V(G) = d + 1
where d is the number of decision nodes. For the first graph given above, V(G) = 1 + 1 = 2.
3. Formula based on regions:
V(G) = number of regions in the graph
For the first graph given above, V(G) = 1 (for Region 1) + 1 (for Region 2) = 2.
Hence, all three formulae give the same cyclomatic complexity, and any of them can be used to compute and verify the cyclomatic complexity of the flow graph.
Note –
1. For one function [e.g. Main( ) or Factorial( )], only one flow graph is constructed. If a program contains multiple functions, a separate flow graph is constructed for each of them. In the cyclomatic complexity formula, the value of P is set according to the total number of graphs present.
2. If a decision node has exactly two arrows leaving it, it is counted as one decision node. However, if more than two arrows leave a decision node, the number of decisions it contributes is computed as d = k - 1, where k is the number of arrows leaving the decision node.
Independent Paths : An independent path in the control flow graph is the one
which introduces at least one new edge that has not been traversed before the
path is defined. The cyclomatic complexity gives the number of independent
paths present in a flow graph. This is because the cyclomatic complexity is used
as an upper-bound for the number of tests that should be executed in order to
make sure that all the statements in the program have been executed at least
once. Consider the first graph given above: the number of independent paths is 2, because the number of independent paths is equal to the cyclomatic complexity. The independent paths in that graph are:
• Path 1:
A -> B
• Path 2:
C -> D
Note – Independent paths are not unique. In other words, if the cyclomatic complexity of a graph comes out to be N, it is possible to obtain two different sets of N paths that are each independent in nature.
Design Test Cases : Finally, after obtaining the independent paths, test cases
can be designed where each test case represents one or more independent
paths.
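A small worked sketch tying these ideas together: a function with one decision node has cyclomatic complexity V(G) = 1 + 1 = 2, so there are two independent paths, and one test case is designed for each path. The function is a made-up example.

# Basis path testing sketch: a function with one decision (V(G) = 2) and
# one test case per independent path. The function is illustrative.

def classify(speed, limit):
    if speed > limit:          # single decision node, so V(G) = 1 + 1 = 2
        return "over limit"    # path 1: decision true
    return "within limit"      # path 2: decision false

# Test case for independent path 1 (decision evaluates to True)
assert classify(80, 60) == "over limit"

# Test case for independent path 2 (decision evaluates to False)
assert classify(50, 60) == "within limit"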
2. Data Flow Testing: The data flow testing method selects test paths of a program according to the locations of the definitions and uses of the variables in the program. To describe the approach, assume that each statement in the program is assigned a unique statement number and that no function modifies its parameters or global variables. For a statement with statement number S:
DEF (S) = {X | statement S contains a definition of X}
USE (S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is empty and its USE set depends on the condition of statement S. The definition of variable X at statement S is said to be live at statement S' if there is a path from S to S' that contains no other definition of X. A definition-use (DU) chain of variable X has the form [X, S, S'], where S and S' denote statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'. A simple data flow testing strategy requires that every DU chain be covered at least once; this is known as the DU testing strategy. DU testing does not guarantee coverage of all the branches of a program. However, a branch is not guaranteed to be covered by DU testing only in rare cases, such as an if-then-else construct in which the then part contains no definition of any variable and the else part does not exist. Data flow testing strategies are useful for selecting test paths of a program containing nested if and loop statements.
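A tiny annotated example of DEF/USE sets and DU chains, using the numbered statements of a made-up code fragment rather than any program from the notes.

# DEF/USE sets and definition-use (DU) chains for a made-up fragment.
# Statement numbers are given in the comments.

def pay(hours, rate):
    wage = hours * rate        # S1: DEF(S1) = {wage},  USE(S1) = {hours, rate}
    if wage > 1000:            # S2: DEF(S2) = {},      USE(S2) = {wage}
        wage = wage * 0.9      # S3: DEF(S3) = {wage},  USE(S3) = {wage}
    return wage                # S4: DEF(S4) = {},      USE(S4) = {wage}

# The definition of wage at S1 is live at S2, giving the DU chain [wage, S1, S2].
# DU testing requires at least one test path covering each such chain, for example:
assert pay(10, 50) == 500          # covers [wage, S1, S2] and [wage, S1, S4]
assert pay(40, 50) == 1800.0       # also covers [wage, S3, S4] via the true branch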
3. Nested Loops – Loops within loops are called nested loops. When testing nested loops, the number of tests increases as the level of nesting increases. The steps for testing nested loops are as follows:
1. Start with the innermost loop; set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.
4. Unstructured loops – This type of loop should be redesigned, whenever possible, to reflect the use of the structured programming constructs.
Black box testing is a type of software testing in which the internal structure and implementation of the software are not known to the tester. The testing is done without internal knowledge of the product, focusing only on its externally visible behaviour.
Black box testing can be done in the following ways:
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for the boundary values of the input domain, the efficiency of testing improves and the probability of finding errors increases. For example, if the valid range is 10 to 100, then test for 10 and 100 as well, in addition to other valid and invalid inputs.
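A small sketch of boundary value analysis for the 10 to 100 range mentioned above: the test values concentrate on the boundaries (just below, on, and just above each boundary), plus one typical valid value. The validity-check function is a made-up example.

# Boundary value analysis sketch for an input whose valid range is 10 to 100.

def in_valid_range(value, low=10, high=100):
    return low <= value <= high

# Boundary-focused test values: just below, on, and just above each boundary,
# plus one typical valid value.
cases = {9: False, 10: True, 11: True, 55: True, 99: True, 100: True, 101: False}

for value, expected in cases.items():
    assert in_valid_range(value) == expected, f"failed for {value}"
print("all boundary value tests passed")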
In decision-table-based testing, each column of the decision table corresponds to a rule, and each rule becomes a test case; a table with four rules therefore yields four test cases.
Regression Testing: It ensures that the newly added code is compatible with
the existing code. In other words, a new software update has no impact on the
functionality of the software. This is carried out after system maintenance operations and upgrades.
SCM involves a set of processes and tools that help to manage the different
components of a software system, including source code, documentation, and
other assets. It enables teams to track changes made to the software system,
identify when and why changes were made, and manage the integration of
these changes into the final product.
SCM repository
In computer software engineering, software configuration management (SCM)
is any kind of practice that tracks and provides control over changes to source
code. Software developers sometimes use revision control software to maintain
documentation and configuration files as well as source code. Revision control
may also track changes to configuration files.
At the simplest level, developers could simply retain multiple copies of the
different versions of the program, and label them appropriately. This simple
approach has been used in many large software projects. While this method can
work, it is inefficient as many near-identical copies of the program have to be
maintained. This requires a lot of self-discipline on the part of developers and
often leads to mistakes. Since the code base is the same, it also requires granting
read-write-execute permission to a set of developers, and this adds the pressure
of someone managing permissions so that the code base is not compromised,
which adds more complexity. Consequently, systems that automate some or all of the revision control process have been developed. These ensure that most version control management steps are hidden behind the scenes.