
SDN Notes


Table of Contents

Unit -1 SDN: INTRODUCTION
1.1 Evolving Network Requirements
1.1.1 The role of vendors in the evolution of SDN
1.1.2 Problems in Traditional Network Devices
1.1.3 Advantages of SDN
1.1.4 Disadvantages of SDN
1.1.5 Why SDN is Important
1.1.6 Components of Software Defined Networking (SDN)
1.2 The SDN Approach
1.2.1 ForCES (Forwarding and Control Element Separation)
1.2.2 4D approach
1.2.3 Ethane
1.2.3.1 Principles of Ethane
1.2.4 SDN Layers
1.2.4.1 Different Models of SDN
Difference between SDN and Traditional Networking
1.3 SDN architecture
Application Layer
Control Layer
Data Plane
Northbound APIs
Southbound APIs
1.4 & 1.5 SDN Data Plane, Control Plane and Application Plane
1.4 SDN Data Plane
1.5 SDN Control Plane
1.5.1 Data Plane vs. Control Plane: What Are the Key Differences?
1.5.2 Software-Defined Networking (SDN) Application
1.5.2.1 SDN application environment
Top Applications and Services that can benefit from SDN
Security services
Network Monitoring and Intelligence
Bandwidth Management
Content Availability
Regulation and Compliance-Bound Applications
High-Performance Applications
Distributed Application Control and Cloud Integration
UNIT - 2 SDN DATA PLANE AND CONTROL PLANE
2.1 Data Plane functions and protocols
2.1.1 SDN Data Plane
2.1.2 Data Plane Functions
2.1.3 Data Plane Protocols
2.2 OpenFlow Protocol
2.3 Flow table
2.3.2 Flow Table Structure
2.3.3 Flow Table Pipeline
2.4 Control Plane Functions
2.5 Southbound Interface
2.6 Northbound Interface
2.7 SDN Controller
2.7.1 Types of SDN Controllers
2.7.3 Disadvantages of SDN Controllers
2.8 Distributed Controllers
2.8.1 High-Availability Clusters
2.8.2 Federated SDN Networks
2.8.3 Border Gateway Protocol
2.8.4 Routing and QoS Between Domains
2.8.5 Using BGP for QoS Management
Unit – 3 SDN APPLICATIONS
3.1 SDN Application Plane Architecture
3.1.1 Northbound Interface
3.1.2 Network Services Abstraction Layer
3.1.2.1 Network Applications
3.1.2.2 User Interface
3.2 Network Services Abstraction Layer
Frenetic
3.3 Traffic Engineering
3.3.1 PolicyCop
3.4 Measurement and Monitoring
3.5 Security
3.5.1 OpenDaylight DDoS Application
3.6 Data Center Networking
3.6.1 Big Data over SDN
3.6.2 Cloud Networking over SDN
UNIT – 4 NETWORK FUNCTION VIRTUALIZATION
4.1 Network Virtualization
4.1.1 Components
4.1.2 Uses
4.1.3 Advantages
4.1.4 Disadvantages
4.2 Virtual LANs
4.2.1 The Use of Virtual LANs
4.2.2 Defining VLANs
4.2.3 Nested VLANs
4.3 OpenFlow VLAN Support
4.4 NFV Concepts
4.4.1 Simple Example of the Use of NFV
4.4.2 NFV Principles
4.4.3 High-Level NFV Framework
4.5 NFV Benefits and Requirements
4.5.1 NFV Benefits
4.5.2 NFV Requirements
4.6 NFV Reference Architecture
4.6.1 NFV Management and Orchestration
4.6.2 Reference Points
4.6.3 Implementation
UNIT -V NFV FUNCTIONALITY
5.1 Network Functions Virtualization
5.1.1 Need of NFV
5.1.2 Advantages
5.1.3 Working
5.1.4 Benefits of NFV
5.1.5 Risks of NFV
5.2 Virtualized Network Functions
5.2.1 VNF Interfaces
5.2.2 VNFC to VNFC Communication
5.2.3 VNF Scaling
5.3 NFV Management and Orchestration
5.3.1 Virtualized Infrastructure Manager
5.3.2 Virtual Network Function Manager
5.3.3 NFV Orchestrator
5.3.4 Repositories
5.3.5 Element Management
5.3.6 OSS/BSS
5.4 NFV Use Cases
5.4.1 Architectural Use Cases
5.4.1.1 NFVI as a Service
5.4.1.2 VNF as a Service
5.4.1.3 Virtual Network Platform as a Service
5.4.1.4 VNF Forwarding Graphs
5.4.1.5 Service-Oriented Use Cases
5.5 SDN and NFV

Unit -1 SDN: INTRODUCTION

1.1 Evolving Network Requirements

Software-defined networking is an evolving network architecture that moves beyond the traditional network architecture by addressing its limitations. A couple of decades ago, programming and networking were viewed as separate domains; with SDN, they are now bridged together. The aim is to overcome the existing challenges faced by the networking domain and to propose cost-efficient, effective, and feasible solutions. Changes to the existing network architecture are inevitable considering the volume of connected devices and the data they carry. SDN introduces a decoupled architecture and brings customization into the network, making it easier to configure, manage, and troubleshoot.

Software-defined networking, or SDN, is a strategy that splits the control plane from the forwarding plane
and pushes management and configuration to centralized consoles.

SDN is now over 10 years old. When the history of SDN began, many people thought gleaming software-
defined networks would replace tightly coupled, vertically integrated network products. The massive data
centers of Amazon, Facebook and Google all moved to SDN, but why isn't SDN everywhere?

Well, it is, even if it's not always called SDN.

The principles of SDN are alive and well, thanks, in part, to cloud computing. All of today's major cloud
providers use SDN. As more workloads move to cloud environments, more organisations will use SDN. Let's
look at the evolution of SDN to see how it got to this point.

1.1.1 The role of vendors in the evolution of SDN

In the corporate data center, practically everything is virtualized -- from workloads to servers to networking.
VMware, the king of the virtualized data center, bought Nicira and rebranded its SDN-style networking as
VMware NSX. Hundreds of thousands of virtual machines in data centers around the world run on NSX,
which means they run on SDN.

Cisco -- the company that initially scoffed at SDN because it threatened the status quo -- eventually hopped on the bandwagon and introduced an SDN variant, Cisco Application Centric Infrastructure, to the market, trying to embrace the future without letting go of the past.

Other networking companies began to turn to SDN, as well. Juniper Networks embraced SDN in its Contrail
products, and Arista Networks integrated SDN principles into its Extensible Operating System in an attempt
to bring a new software-defined cloud networking to the market.

Smaller vendors, like Dell Technologies and Hewlett Packard Enterprise, used the SDN strategy to open up
their platforms, split tightly coupled hardware and software apart, and inject customer choice into the
process. While not necessarily SDN, this open networking strategy is an important part of the evolution of
SDN's overall viability.

1.1.2 Problems in Traditional Network Devices

• They are vendor specific


• Hardware & Software is bundled together
• Very costly
• New features can only be added at the will of the vendor.
• Clients can only request features; the vendor decides whether to add them, and the time frame in which these features become available is at the sole discretion of the vendor.
• Devices are function specific. You cannot make your router behave like a load balancer or make your switch behave like a firewall, or vice versa.
• If your network consists of hundreds of these devices, each device has to be configured individually. There is no centralized management.
• Innovations are very rare. The last three decades have not seen many innovations in networking, whereas the compute and storage industries have seen drastic changes such as compute virtualization and storage virtualization. Networking has not been able to keep pace with the other ingredients of cloud computing.

1.1.3 Advantages of SDN

• The network is programmable and hence can easily be modified via the controller rather than
individual switches.
• Switch hardware becomes cheaper since each switch only needs a data plane.
• Hardware is abstracted, hence applications can be written on top of the controller independent of the
switch vendor.
• Provides better security since the controller can monitor traffic and deploy security policies. For
example, if the controller detects suspicious activity in network traffic, it can reroute or drop the
packets.

1.1.4 Disadvantages of SDN

• The central dependency of the network means a single point of failure, i.e. if the controller gets
corrupted, the entire network will be affected.
• The use of SDN on a large scale is not properly defined and explored.

1.1.5 Why SDN is Important

• Better Network Connectivity: SDN provides much better network connectivity for sales, services, and internal communications. SDN also helps in faster data sharing.
• Better Deployment of Applications: Deployment of new applications, services, and business models can be sped up using Software Defined Networking.
• Better Security: A software-defined network provides better visibility throughout the network. Operators can create separate zones for devices that require different levels of security. SDN networks give more freedom to operators.
• Better Control with High Speed: Software-defined networking provides better speed than other networking types by applying an open, standards-based software controller.

In short, SDN acts as a bigger umbrella or hub under which the other networking technologies come together and merge with the platform to bring out the best possible outcome, reducing traffic congestion and increasing the efficiency of data flow.

1.1.6 Components of Software Defined Networking (SDN)

The three main components that make up SDN, illustrated by the sketch after this list, are:

1. SDN Applications: SDN applications relay requests and network requirements to the SDN Controller using
APIs.

2. SDN controller: SDN Controller collects network information from hardware and sends this
information to applications.
3. SDN networking devices: SDN Network devices help in forwarding and data processing tasks.
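A minimal, purely illustrative Python sketch of how these three components interact is shown below. All class and method names here are hypothetical, invented for this example; real controllers such as OpenDaylight, ONOS, or Ryu expose far richer APIs.

# Minimal sketch of the three SDN components interacting.
# All names are hypothetical and for illustration only.

class SDNNetworkDevice:
    """Data-plane device: only stores and applies forwarding rules."""
    def __init__(self, name):
        self.name = name
        self.flow_rules = []          # rules pushed down by the controller

    def install_rule(self, rule):
        self.flow_rules.append(rule)

class SDNController:
    """Control plane: collects network state and programs the devices."""
    def __init__(self, devices):
        self.devices = devices

    def topology(self):
        return [d.name for d in self.devices]   # network information for apps

    def apply_policy(self, match, action):
        # Translate a high-level request into per-device rules.
        for device in self.devices:
            device.install_rule({"match": match, "action": action})

class SDNApplication:
    """Application plane: expresses intent through the controller's API."""
    def __init__(self, controller):
        self.controller = controller

    def block_host(self, ip_address):
        self.controller.apply_policy(match={"ipv4_src": ip_address},
                                     action="drop")

# Usage: an application blocks a suspicious host network-wide.
switches = [SDNNetworkDevice("s1"), SDNNetworkDevice("s2")]
controller = SDNController(switches)
app = SDNApplication(controller)
app.block_host("10.0.0.99")
print(switches[0].flow_rules)   # [{'match': {'ipv4_src': '10.0.0.99'}, 'action': 'drop'}]

The point of the sketch is the division of labour: the application states intent, the controller translates it into rules, and the devices merely store and apply those rules.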

1.2 The SDN Approach

In traditional networks, the control and data plane are embedded together as a single unit. The control plane
is responsible for maintaining the routing table of a switch which determines the best path to send the
network packets and the data plane is responsible for forwarding the packets based on the instructions given
by the control plane. Whereas in SDN, the control plane and data plane are separate entities, where the
control plane acts as a central controller for many data planes.

There are many approaches that led to the development of today’s Software Defined Networks (SDN). They are:

● ForCES
● 4D approach
● Ethane

1.2.1 ForCES (Forwarding and Control Element Separation):

The idea of separating the data plane (forwarding elements) from the control plane was first proposed by ForCES: hardware-based forwarding elements are controlled by a software-based control element.

ForCES can be implemented in two ways:

1. The forwarding element and control plane are located within the same network device
2. The control element is taken off the device and placed in a separate system.

1.2.2 4D approach:

The 4D approach has four planes:

● Decision
● Dissemination
● Discovery
● Data

It follows three principles:

Network-level objectives: The objectives should be stated in terms of the whole network instead of individual devices, so that there is no need to depend on proprietary devices.

Network-wide view: Decisions should be made based on the understanding of the whole network’s traffic,
topology, and events. Actions should be taken based on considering a network-wide view.

Direct control: The control plane elements should directly be able to control the data plane elements. It
should have the ability to program the forwarding table on individual devices.

1.2.3 Ethane:

Ethane specifies network-level access for users, as defined by network administrators. Ethane is the direct forerunner of Software Defined Networks (SDN).

1.2.3.1 Principles of Ethane

● The network should be governed by high-level policies.

● Routing should follow those high-level policies.
● There should be a strong binding between packets and their origin in the network.

1.2.4 SDN Layers

The layers communicate via a set of interfaces called the northbound APIs (between the application and control layers) and southbound APIs (between the control and infrastructure layers).

1.2.4.1 Different Models of SDN


There are several models, which are used in SDN:

1. Open SDN
2. SDN via APIs
3. SDN via Hypervisor-based Overlay Network
4. Hybrid SDN

1. Open SDN: Open SDN is implemented using OpenFlow switches. It is a straightforward implementation of SDN. In Open SDN, the controller communicates with the switches using the southbound API, with the help of the OpenFlow protocol.

2. SDN via APIs: In SDN via APIs, the functions in remote devices like switches are invoked using conventional methods like SNMP or CLI, or through newer methods like REST APIs. Here, the devices are provided with control points enabling the controller to manipulate the remote devices using APIs.

3. SDN via Hypervisor-based Overlay Network: In SDN via the hypervisor, the configuration of physical
devices is unchanged. Instead, Hypervisor based overlay networks are created over the physical network.
Only the devices at the edge of the physical network are connected to the virtualized networks, thereby
concealing the information of other devices in the physical network.

4. Hybrid SDN: Hybrid SDN combines traditional networking with software-defined networking in one network to support different types of functions on the network.

Difference between SDN and Traditional Networking

Software Defined Networking: A software defined network is a virtual networking approach.
Traditional Networking: A traditional network is the old, conventional networking approach.

Software Defined Networking: Centralized control.
Traditional Networking: Distributed control.

Software Defined Networking: The network is programmable.
Traditional Networking: The network is non-programmable.

Software Defined Networking: Open interface.
Traditional Networking: Closed interface.

Software Defined Networking: The data plane and control plane are decoupled by software.
Traditional Networking: The data plane and control plane are mounted on the same device.

1.3 SDN architecture

Software Defined Networking (SDN) architecture revolutionizes traditional networking by decoupling the
control plane from the data plane, introducing centralized control and programmability. The architecture comprises several key components that work together to provide flexibility, scalability, and efficiency in network management and operation.

Application Layer:

At the top layer of the SDN architecture are the applications that leverage the programmable nature of SDN
to enable various network services and functionalities. These applications interact with the SDN controller
through APIs to implement network policies, traffic engineering, security, and other services.

Control Layer:

The control layer houses the SDN controller, which serves as the brain of the network. The controller
communicates with the applications through APIs and is responsible for orchestrating network behavior
based on the high-level policies defined by the applications. It abstracts the underlying network infrastructure
and translates application policies into low-level network configurations.

Data Plane:

The data plane comprises the network devices such as switches, routers, and access points that forward
data packets based on the instructions received from the SDN controller. Unlike traditional networking,
where the control and data planes are tightly integrated within the network devices, SDN separates these
planes, allowing for centralized control and programmability.

Northbound APIs:

Northbound APIs enable communication between the SDN controller and the applications or orchestration
systems. These APIs provide a means for applications to request network services and provide policy
requirements to the controller. They abstract the underlying complexity of the network infrastructure,
enabling applications to interact with the network in a programmable and policy-driven manner.
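As a concrete illustration of a northbound API, the sketch below pushes a flow entry to a controller over REST. The URL and JSON body follow an OpenDaylight-style RESTCONF layout, but exact paths, data models, addresses, and credentials differ between controllers and versions, so treat them as placeholders rather than a definitive API reference.

# Illustrative sketch: an application using a controller's northbound REST API
# to install a flow. OpenDaylight-style RESTCONF layout; all addresses,
# paths, and credentials are placeholders.
import requests

CONTROLLER = "http://192.0.2.10:8181"        # hypothetical controller address
FLOW_URL = (CONTROLLER + "/restconf/config/opendaylight-inventory:nodes/"
            "node/openflow:1/flow-node-inventory:table/0/flow/1")

flow = {
    "flow": [{
        "id": "1",
        "table_id": 0,
        "priority": 100,
        "match": {"ethernet-match": {"ethernet-type": {"type": 2048}},
                  "ipv4-destination": "10.0.0.5/32"},
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0, "output-action": {"output-node-connector": "2"}
            }]}
        }]}
    }]
}

resp = requests.put(FLOW_URL, json=flow, auth=("admin", "admin"),
                    headers={"Content-Type": "application/json"})
print(resp.status_code)   # 200/201 expected on success with a real controller

The application never talks to the switch directly: it states what it wants (send traffic for 10.0.0.5 out port 2), and the controller translates that request into southbound protocol messages.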

Southbound APIs:

Southbound APIs enable communication between the SDN controller and the network devices at the data
plane. These APIs facilitate the exchange of information such as network topology, traffic statistics, and
forwarding rules. Protocols such as OpenFlow, NETCONF, and YANG are commonly used for
communication between the controller and network devices.

Overall, SDN architecture provides a flexible and agile framework for network management and operation.
By separating the control plane from the data plane and introducing centralized control and programmability,
SDN enables dynamic network provisioning, efficient resource utilization, rapid service deployment, and
simplified network management, paving the way for the next generation of networking technologies.

1.4 & 1.5 SDN Data Plane ,Control plane and Application Plane

1.4 SDN Data Plane

While the Control Plane supervises and directs, the Data Plane is responsible for the actual movement of data
from one system to another. It is the workhorse that delivers data to end users from systems and vice versa.
Some examples of data planes include:
● Ethernet networks
● Wi-Fi networks
● Cellular networks
● Satellite communications
Data planes can also include virtualized networks, like those created using virtual private networks (VPNs)
or software-defined networks (SDNs). Additionally, data planes can include dedicated networks, like the
Internet of Things (IoT) or industrial control systems.
Data planes allow organizations to quickly and securely transfer data between systems. For example, a data
plane can enable the transfer of data between a cloud-based application and a local system. This functionality
can be beneficial for organizations that need to access data from multiple systems or that need to quickly
transfer large amounts of data.

By using dedicated networks, organizations can keep data secure through encryption, dedicated networks,
and access monitoring to prevent unauthorized access of data.

1.5 SDN Control Plane

The Control Plane is a crucial component of a network, tasked with making decisions on how data should be
managed, routed, and processed. It acts as a supervisor of data, coordinating communication between
different components and collecting data from the Data Plane.

Control Planes utilize various protocols, such as:


● Routing protocols (like BGP, OSPF, and IS-IS)
● Network management protocols (SNMP)
● Application layer protocols (HTTP and FTP)
These protocols often employ software-defined networking (SDN) to create virtual networks and manage
their traffic. Virtual networks, facilitated by SDN, are instrumental in managing data traffic at an enterprise
level. They enable organizations to:
● Segment traffic
● Prioritize important data flows
● Isolate traffic from different parts of the network

1.5.1 Data Plane vs. Control Plane: What Are the Key Differences?

The main differences between control and data planes are their purpose and how they communicate between
different systems. The control plane decides how data is managed, routed, and processed, while the data
plane is responsible for the actual moving of data. For example, the control plane decides how packets
should be routed, and the data plane carries out those instructions by forwarding the packets.
Along with doing different jobs, control planes and data planes exist in different places. The control plane typically runs centrally (for example, on a controller or in the cloud), while the data plane runs on the devices that actually process and forward the data.
They also use different functions to do their jobs. Control planes use protocols to communicate between
different systems, mostly common routing protocols like BGP, OSPF, and IS-IS or network management
protocols like SNMP. These protocols enable the control plane to make decisions on how data should be
managed, routed, and processed.
Data planes use dedicated networks to communicate between different systems. Examples of dedicated
networks used in data planes include Ethernet and Wi-Fi networks, cellular networks, satellite
communications, virtualized networks, and dedicated networks used in industrial control systems or IoT.
These networks enable the data plane to deliver data to end users from systems and vice versa.
While both the Control Plane and Data Plane are integral to network management, they perform distinct
roles. The table below outlines some of the key differences between the two:

Control Plane: Determines how data should be managed, routed, and processed.
Data Plane: Responsible for moving packets from source to destination.

Control Plane: Builds and maintains the IP routing table.
Data Plane: Forwards actual IP packets based on the Control Plane’s logic.

Control Plane: Packets are processed by the router to update the routing table.
Data Plane: Forwards packets based on the logic built by the Control Plane.

1.5.2 Software-Defined Networking (SDN) Application

A software-defined networking (SDN) application is a software program designed to perform a task in a software-defined networking environment. SDN is an approach to computer networking that allows network administrators to programmatically initialize, control, change, and manage network behaviour dynamically through open interfaces, while abstracting the lower-level functionality. SDN applications also help in extending or substituting functions that are accomplished in the firmware of the hardware devices of a regular network.

1.5.2.1 SDN application environment

Internal SDN Applications

Applications that are hosting the rest of the OpenDaylight controller software and are deployed internally,
run inside the container. These applications must be written in the native language which is Java for ODL.
Internal SDN applications must also adhere to the execution and design constraints of the controller. It must
also execute in the same Java Machine as the controller which means that these types of the application must
run locally with the controller. It can also access the MD-SAL applications and Java APIs of the controller
running inside the controller’s OSGi container.

External SDN Applications

External SDN applications are deployed outside the Open Daylight controller and run outside its container. Any language can be used for writing external SDN applications, including scripting languages such as Bash. These applications can run remotely, which means on a different host than the controller. They interact with the controller through the REST APIs it provides, and may themselves offer RESTful access to their own services.

Top Applications and Services that can benefit from SDN are:

Security services

The modern virtualization ecosystem supports specific virtual services running within the network layer, which means incorporating functions like NFV into SDN platforms. This type of network security creates a genuinely proactive environment that is capable of reducing risk and responding to incidents very quickly. Whenever a violation occurs, every second is critical to stopping the attack. It is also essential to identify the attack and to ensure that other network components are safe from it. As organizations in the modern era become even more digitized, and as the network layer becomes even more critical, we will see more attacks and more sophisticated advanced persistent threats. By integrating potent security services into the SDN layer, you can create a more proactive environment that is capable of responding to these changes.

Network Monitoring and Intelligence

Modern SDN technologies help abstract one of the most critical layers within the data centre: the network. Network architectures are complicated and have to handle far more data than ever before. This means it is critical to know what is flowing through your environment. Do you have transmission issues on a port? What happens if you are running a heterogeneous network architecture? Or are you passing a lot of traffic and heavily virtualized through the network architecture? All of these challenges are diminished if you have a solid network monitoring and intelligence layer. You gain further benefit and true insight if you integrate these technologies into your SDN architecture. Optimization, alerting, hypervisor integration, port configurations, and traffic flow can all be incorporated into network monitoring and intelligence technologies. These types of agile systems will also help you monitor network traffic between your cloud ecosystem and your data centre.

Bandwidth Management

With the help of SDN applications, operators can use bandwidth management to ensure that end users receive optimal browsing and online video experiences. This type of SDN application can monitor bandwidth requirements and then provision user flows to match the latency and bandwidth requirements of the Layer 7 application. This application-aware approach to bandwidth management also ensures a better user experience, with better video playback and zero buffering. At this stage in the game, there is little doubt that SDN is becoming a reality in operator networks.

However, it is the SDN applications that will really bring powerful improvements to the operator’s business
and networks, beyond the immediate impact of simpler management of the network. And so the network
infrastructure providers need to start mapping out this future to calculate all the potential that can be
provided by SDN.

By acting and thinking ahead on SDN applications now, network infrastructure operators and providers will
be able to rapidly evolve to provide flexible, customized networks that can entirely enhance their own
bottom lines and can also enhance the end user experience.

Content Availability

On a service-provider edge network there will be content servers used for media delivery or caching, installed by the content delivery network or operator service providers. Content that is to be served to users is distributed and cached across multiple content servers, and in some cases across various geographies.

SDN applications built to handle content availability can provision flows in the network based on the availability and type of content. SDN applications can also check the availability of content on the content servers before routing requests to them. A content-routing application provides intelligence on content availability and enables discovery of content on the content servers.

This intelligence can then be used to route requests to the correct servers, wherever the content resides. For example, an SDN application can direct requests for non-cacheable, dynamically generated content to a server that provides active content rather than to a caching server, which significantly reduces unnecessary load and service interruptions in the network.

Regulation and Compliance-Bound Applications

Major cloud vendors now provide the capability to store and work with compliance- and regulation-bound workloads. Organizations now have the option to extend architectures that were initially very limited because of regulation into cloud and distributed environments. How can you segment the traffic? How can you ensure that regulation- and compliance-bound workloads are persistently monitored and secured? Here SDN can be a great help.

Network points, network traffic travelling between switches, and even hypervisors can be controlled in an SDN architecture. Remember that this layer abstracts virtual hardware and function controls. This powerful layer can then span various virtualization points, locations, and even cloud locations.

High-Performance Applications

We are all seeing a rise in new types of application technologies. Virtualization allows the delivery of rich applications such as graphics design software, engineering tools, CAD, and GIS. Traditionally, these workloads required bare-metal architectures with their own connections. With the help of virtualization, however, applications can be streamed and VDI can create powerful desktop experiences. We are also seeing the integration of SDN into application control at the network layer. Functions such as segmenting heavy traffic, securing confidential data, creating powerful QoS policies, and even creating threshold alerts around bottlenecks within SDN will help support the rich, high-performance applications that are being delivered through virtualization.

Distributed Application Control and Cloud Integration

The capability to extend across the entire data centre is one of the most significant benefits of SDN. This type of agility integrates distributed clouds, locations, and the organization as a whole. SDN also allows critical network traffic to pass between various locations irrespective of the type of underlying network architecture. By abstracting critical network controls, you also permit easier movement of data between cloud locations and the data centre. Because SDN is a form of network virtualization, you can use powerful APIs not only to integrate with a cloud provider but also to control specific network services. This allows you to manage your workloads granularly while keeping your business agile.

Organizations use SDN applications for many functions; the above are just a few of the main ones to consider. You should understand how SDN applications can positively impact your business and your data centre. SDN fundamentally simplifies the entire networking layer and gives you granular control over your distributed data centre ecosystem, services, and applications.

Also, SDN helps you to design a business capable of adjusting to changes in the industry and market shifts.
This also allows your organization to be truly productive and agile.

UNIT - 2 SDN DATA PLANE AND CONTROL PLANE

2.1 Data Plane functions and protocols

2.1.1 SDN Data Plane

The SDN data plane, referred to as the resource layer in ITU-T Y.3300 and also often referred to as the
infrastructure layer, is where network forwarding devices perform the transport and processing of data
according to decisions made by the SDN control plane. The important characteristic of the network devices
in an SDN network is that these devices perform a simple forwarding function, without embedded software
to make autonomous decisions.

2.1.2 Data Plane Functions

Figure 4.2 illustrates the functions performed by the data plane network devices (also called data plane
network elements or switches). The principal functions of the network device are the following:

FIGURE 4.2 Data Plane Network Device

Control support function: Interacts with the SDN control layer to support programmability via resource-
control interfaces. The switch communicates with the controller and the controller manages the switch via
the OpenFlow switch protocol.

Data forwarding function: Accepts incoming data flows from other network devices and end systems and
forwards them along the data forwarding paths that have been computed and established according to the
rules defined by the SDN applications.

These forwarding rules used by the network device are embodied in forwarding tables that indicate for given
categories of packets what the next hop in the route should be. In addition to simple forwarding of a packet,
the network device can alter the packet header before forwarding, or discard the packet. As shown, arriving
packets may be placed in an input queue, awaiting processing by the network device, and forwarded packets
are generally placed in an output queue, awaiting transmission.

The network device in Figure 4.2 is shown with three I/O ports: one providing control communication with
an SDN controller, and two for the input and output of data packets. This is a simple example. The network device may have multiple ports to communicate with multiple SDN controllers, and may have more than two I/O ports for packet flows into and out of the device.
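The data forwarding function described above can be modelled in a few lines of Python. This is a toy model of Figure 4.2, not a real switch implementation: packets are taken from an input queue, matched against a forwarding table populated by the controller, and either altered and placed on an output queue or discarded.

# Toy model of the data forwarding function in Figure 4.2.
from collections import deque

forwarding_table = {
    # destination address (simplified to exact match) -> next-hop port
    "10.0.0.5": "port2",
    "10.0.0.7": "port3",
}

input_queue = deque([
    {"dst": "10.0.0.5", "ttl": 10},
    {"dst": "192.168.1.1", "ttl": 5},   # no matching entry -> discarded below
])
output_queues = {"port2": deque(), "port3": deque()}

while input_queue:
    packet = input_queue.popleft()      # awaiting processing by the device
    port = forwarding_table.get(packet["dst"])
    if port is None:
        continue                        # discard: no rule for this packet
    packet["ttl"] -= 1                  # example of altering the header
    output_queues[port].append(packet)  # awaiting transmission

print(output_queues["port2"])           # [{'dst': '10.0.0.5', 'ttl': 9}]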

2.1.3 Data Plane Protocols

Figure 4.2 suggests the protocols supported by the network device. Data packet flows consist of streams of
IP packets. It may be necessary for the forwarding table to define entries based on fields in upper-level
protocol headers, such as TCP, UDP, or some other transport or application protocol. The network device
examines the IP header and possibly other headers in each packet and makes a forwarding decision.

The other important flow of traffic is via the southbound application programming interface (API),
consisting of OpenFlow protocol data units (PDUs) or some similar southbound API protocol traffic.

2.2 OpenFlow Protocol

The OpenFlow protocol describes message exchanges that take place between an OpenFlow controller and
an OpenFlow switch. Typically, the protocol is implemented on top of TLS, providing a secure OpenFlow
channel.

The OpenFlow protocol enables the controller to perform add, update, and delete actions to the flow entries
in the flow tables. It supports three types of messages (see Table 4.2):

Controller to switch: These messages are initiated by the controller and, in some cases, require a response
from the switch. This class of messages enables the controller to manage the logical state of the switch,
including its configuration and details of flow and group table entries. Also included in this class is the
Packet-out message. This message is sent by the controller to a switch when that switch sends a packet to the
controller and the controller decides not to drop the packet but to direct it to a switch output port.
Asynchronous: These types of messages are sent without solicitation from the controller. This class
includes various status messages to the controller. Also included is the Packet-in message, which may be
used by the switch to send a packet to the controller when there is no flow table match.
Symmetric: These messages are sent without solicitation from either the controller or the switch. They are
simple yet helpful. Hello messages are typically sent back and forth between the controller and switch when
the connection is first established. Echo request and reply messages can be used by either the switch or
controller to measure the latency or bandwidth of a controller-switch connection or just verify that the device
is up and running. The Experimenter message is used to stage features to be built in to future versions of
OpenFlow.
In general terms, the OpenFlow protocol provides the SDN controller with three types of information to be
used in managing the network:
Event-based messages: Sent by the switch to the controller when a link or port change occurs.
Flow statistics: Generated by the switch based on traffic flow. This information enables the controller to
monitor traffic, reconfigure the network as needed, and adjust flow parameters to meet QoS requirements.
Encapsulated packets: Sent by the switch to the controller either because there is an explicit action to
send this packet in a flow table entry or because the switch needs information for establishing a new flow.
The OpenFlow protocol enables the controller to manage the logical structure of a switch, without regard to
the details of how the switch implements the OpenFlow logical architecture.
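The sketch below shows these message types from a controller's point of view, using the Ryu framework (one of several OpenFlow controllers, used here purely as an example). It is a simplified, hub-style app modelled on Ryu's standard simple_switch samples: the switch sends a Packet-in (asynchronous message), and the controller replies with a Flow-mod and a Packet-out (controller-to-switch messages).

# Minimal Ryu controller app illustrating the OpenFlow message types above.
# Simplified hub-style behaviour; run with: ryu-manager minimal_switch.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # the asynchronous Packet-in message
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # Controller-to-switch: install a flow entry that floods packets
        # arriving on this port, so future packets skip the controller.
        match = parser.OFPMatch(in_port=in_port)
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))

        # Controller-to-switch: Packet-out, returning this packet for output.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))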

2.3 Flow table

In Software-Defined Networking (SDN), the "Flow Table" plays a crucial role in the data plane of network
devices, particularly in switches. The Flow Table is where rules for packet forwarding are stored and
processed. Let's delve into the specifics:

1. Functionality: The Flow Table is a fundamental component of SDN switches. It's akin to a database
where rules, known as flow entries, are stored. Each flow entry consists of match fields and corresponding
actions.

2. Match Fields: These fields define the characteristics of packets that the switch will examine to determine
whether they match a particular flow entry. Common match fields include source and destination addresses,
ports, VLAN tags, and packet header information (e.g., IP protocol, TCP/UDP ports).

3. Actions: Once a packet matches a flow entry, the switch executes specific actions associated with that
entry. Actions can include forwarding the packet out a particular port, dropping the packet, modifying packet
headers, or sending the packet to the controller for further processing.

4. Priority and Wildcard Entries: Flow entries in the table have priorities assigned to them. When a packet
matches multiple flow entries, the entry with the highest priority is selected. Additionally, wildcard entries
can match multiple packets based on common criteria, simplifying rule management.

5. Flow Table Lookup: When a packet arrives at the switch, it is compared against the flow entries in the
table using the match fields. This process is known as a flow table lookup. If a match is found, the
corresponding actions are executed. If no match is found (a table miss), the packet is often forwarded to the
controller for further handling.

6. Flow Table Management: The SDN controller is responsible for managing the flow table entries. It can
dynamically add, modify, or remove entries based on network conditions, policies, or events. This dynamic
control allows for flexible and programmable packet forwarding behavior.

7. Flow Table Capacity: The capacity of the flow table varies depending on the capabilities of the switch
hardware and the SDN controller's software. Larger capacity allows for more complex forwarding behavior
and support for a greater number of concurrent flows.

8. Flow Table Aging and Eviction: Flow entries may have a limited lifetime, after which they are removed
from the table. This process, known as aging, helps manage resource usage and ensures that the flow table
remains up-to-date. Entries may also be evicted to make room for new entries when the table reaches its
capacity.

9. Performance Considerations: Efficient flow table lookup is crucial for maintaining network
performance. Switches employ various techniques, such as caching and hardware acceleration, to optimize
lookup speed and reduce latency.

10. Security and Policy Enforcement: The flow table is a central point for enforcing network security
policies. By carefully configuring flow entries, administrators can control traffic flows, implement access
control policies, and mitigate security threats.

In summary, the Flow Table is a critical component of SDN switches, facilitating flexible and programmable
packet forwarding based on predefined rules. Its management and optimization are key considerations for
achieving efficient and secure network operation in SDN environments.
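The behaviour described in points 2 through 8 can be captured in a small, self-contained Python model: priority-ordered matching over match fields, counters, idle/hard timeouts, and a wildcard-all entry acting like a table-miss rule. This is a didactic sketch of the logic only; real switches implement it in hardware (for example, TCAM).

# Didactic model of a single OpenFlow-style flow table.
import time

class FlowEntry:
    def __init__(self, match, actions, priority=0, idle_timeout=0, hard_timeout=0):
        self.match = match                    # dict of field -> required value
        self.actions = actions                # e.g. ["output:2"] or ["drop"]
        self.priority = priority
        self.idle_timeout = idle_timeout      # 0 means "never expire"
        self.hard_timeout = hard_timeout
        self.packet_count = 0                 # counter updated on each match
        self.created = self.last_hit = time.time()

    def matches(self, packet):
        # A field absent from self.match acts as a wildcard.
        return all(packet.get(f) == v for f, v in self.match.items())

class FlowTable:
    def __init__(self):
        self.entries = []                     # managed by the controller

    def add(self, entry):
        self.entries.append(entry)

    def expire(self, now=None):
        now = now or time.time()
        self.entries = [e for e in self.entries
                        if not (e.hard_timeout and now - e.created > e.hard_timeout)
                        and not (e.idle_timeout and now - e.last_hit > e.idle_timeout)]

    def lookup(self, packet):
        self.expire()
        candidates = [e for e in self.entries if e.matches(packet)]
        if not candidates:
            return ["send_to_controller"]     # behaves like a table miss
        best = max(candidates, key=lambda e: e.priority)
        best.packet_count += 1
        best.last_hit = time.time()
        return best.actions

# Usage: one specific high-priority entry plus a wildcard-all low-priority entry.
table = FlowTable()
table.add(FlowEntry({"ipv4_dst": "10.0.0.5"}, ["output:2"], priority=10, idle_timeout=30))
table.add(FlowEntry({}, ["drop"], priority=0))
print(table.lookup({"ipv4_dst": "10.0.0.5", "tcp_dst": 80}))   # ['output:2']
print(table.lookup({"ipv4_dst": "10.0.0.9"}))                  # ['drop']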

2.3.2 Flow Table Structure

Match fields: Used to select packets that match the values in the fields.

Priority: Relative priority of table entries. This is a 16-bit field, with 0 corresponding to the lowest priority.
In principle, there could be 2^16 = 64k priority levels.
Counters: Updated for matching packets. The OpenFlow specification defines a variety of counters. Table
4.1 lists the counters that must be supported by an OpenFlow switch.

Action: Instructions to be performed if a match occurs.


Timeouts: Maximum amount of idle time before a flow is expired by the switch. Each flow entry has an
idle_timeout and a hard_timeout associated with it. A nonzero hard_timeout field causes the flow entry to be
removed after the given number of seconds, regardless of how many packets it has matched. A nonzero
idle_timeout field causes the flow entry to be removed when it has matched no packets in the given number
of seconds.
Match Fields Component
The match fields component of a table entry consists of the following required fields (see part b of Figure
4.5):
Ingress port: The identifier of the port on this switch on which the packet arrived. This may be a physical
port or a switch-defined virtual port. Required in ingress tables.
Egress port: The identifier of the egress port from action set. Required in egress tables.
Ethernet source and destination addresses: Each entry can be an exact address, a bitmasked value for
which only some of the address bits are checked, or a wildcard value (match any value).
Ethernet type field: Indicates type of the Ethernet packet payload.
IP: Version 4 or 6.
IPv4 or IPv6 source address, and destination address: Each entry can be an exact address, a bitmasked
value, a subnet mask value, or a wildcard value.
TCP source and destination ports: Exact match or wildcard value.

UDP source and destination ports: Exact match or wildcard value.
The preceding match fields must be supported by any OpenFlow-compliant switch. The following fields may
be optionally supported.
Physical port: Used to designate underlying physical port when packet is received on a logical port.
Metadata: Additional information that can be passed from one table to another during the processing of a
packet. Its use is discussed subsequently.
VLAN ID and VLAN user priority: Fields in the IEEE 802.1Q virtual LAN header. SDN support for
VLANs is discussed in Chapter 8, “NFV Functionality.”
IPv4 or IPv6 DS and ECN: Differentiated Services and Explicit Congestion Notification fields.
SCTP source and destination ports: Exact match or wildcard value for Stream Control Transmission Protocol.
ICMP type and code fields: Exact match or wildcard value.
ARP opcode: Exact match; applicable when the Ethernet Type field indicates ARP.
Source and target IPv4 addresses in ARP payload: Can be an exact address, a bitmasked value, a subnet
mask value, or a wildcard value.
IPv6 flow label: Exact match or wildcard.
ICMPv6 type and code fields: Exact match or wildcard value.
IPv6 neighbor discovery target address: In an IPv6 Neighbor Discovery message.
IPv6 neighbor discovery source and target addresses: Link-layer address options in an IPv6 Neighbor
Discovery message.
MPLS label value, traffic class, and BoS: Fields in the top label of an MPLS label stack.
Provider bridge traffic ISID: Service instance identifier.
Tunnel ID: Metadata associated with a logical port.
TCP flags: Flag bits in the TCP header. May be used to detect start and end of TCP connections.
IPv6 extension: Extension header.
Thus, OpenFlow can be used with network traffic involving a variety of protocols and network services.
Note that at the MAC/link layer, only Ethernet is supported. Therefore, OpenFlow as currently defined
cannot control Layer 2 traffic over wireless networks.
Each of the fields in the match fields component either has a specific value or a wildcard value, which
matches any value in the corresponding packet header field. A flow table may include a table-miss flow
entry, which wildcards all match fields (every field is a match regardless of value) and has the lowest
priority.
We can now offer a more precise definition of the term flow. From the point of view of an individual switch,
a flow is a sequence of packets that matches a specific entry in a flow table. The definition is packet oriented,
in the sense that it is a function of the values of header fields of the packets that constitute the flow, and not a
function of the path they follow through the network. A combination of flow entries on multiple switches
defines a flow that is bound to a specific path.
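To make the match-field discussion concrete, the snippet below builds a few OpenFlow 1.3 matches using the Ryu parser, as one example implementation. The keyword names (eth_type, ipv4_dst, tcp_dst, and so on) are Ryu's; other controllers and libraries name the same fields slightly differently, so treat this as an illustration rather than a canonical reference.

# Building OpenFlow 1.3 match fields with the Ryu parser (illustrative).
from ryu.ofproto import ofproto_v1_3_parser as parser

# Exact match on an IPv4/TCP flow; the Ethernet type and IP protocol
# values establish the prerequisites for the layer-3/4 fields.
exact = parser.OFPMatch(in_port=1,
                        eth_type=0x0800,          # IPv4
                        ipv4_src="10.0.0.1",
                        ipv4_dst="10.0.0.5",
                        ip_proto=6,               # TCP
                        tcp_dst=80)

# Bitmasked / subnet match: (value, mask) pairs act like partial wildcards.
masked = parser.OFPMatch(eth_type=0x0800,
                         ipv4_dst=("10.0.0.0", "255.255.255.0"))

# An empty match wildcards every field, as in a table-miss entry.
table_miss = parser.OFPMatch()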

2.3.3 Flow Table Pipeline

A switch includes one or more flow tables. If there is more than one flow table, they are organized as a
pipeline, with the tables labeled with increasing numbers starting with zero. The use of multiple tables in a
pipeline, rather than a single flow table, provides the SDN controller with considerable flexibility.

The OpenFlow specification defines two stages of processing:


Ingress processing: Ingress processing always happens, beginning with Table 0, and uses the identity of
the input port. Table 0 may be the only table, in which case the ingress processing is simplified to the
processing performed on that single table, and there is no egress processing.
Egress processing: Egress processing is the processing that happens after the determination of the output
port. It happens in the context of the output port. This stage is optional. If it occurs, it may involve one or
more tables. The separation of the two stages is indicated by the numerical identifier of the first egress table.
All tables with a number lower than the first egress table must be used as ingress tables, and no table with a
number higher than or equal to the first egress table can be used as an ingress table.
Pipeline processing always starts with ingress processing at the first flow table; the packet must be first
matched against flow entries of flow Table 0. Other ingress flow tables may be used depending on the
outcome of the match in the first table. If the outcome of ingress processing is to forward the packet to an
output port, the OpenFlow switch may perform egress processing in the context of that output port.
When a packet is presented to a table for matching, the input consists of the packet, the identity of the ingress
port, the associated metadata value, and the associated action set. For Table 0, the metadata value is blank
and the action set is null. At each table, processing proceeds as follows (see Figure 4.6):

FIGURE 4.6 Simplified Flowchart Detailing Packet Flow Through an OpenFlow Switch

1. If there is a match on one or more entries, other than the table-miss entry, the match is defined to be with
the highest-priority matching entry. As mentioned in the preceding discussion, the priority is a component of
a table entry and is set via OpenFlow; the priority is determined by the user or application invoking
OpenFlow. The following steps may then be performed:

a. Update any counters associated with this entry.


b. Execute any instructions associated with this entry. This may include updating the action set, updating the
metadata value, and performing actions.
c. The packet is then forwarded to a flow table further down the pipeline, to the group table, to the meter
table, or directed to an output port.
2. If there is a match only on a table-miss entry, the table entry may contain instructions, as with any other
entry. In practice, the table-miss entry specifies one of three actions:
a. Send packet to controller. This will enable the controller to define a new flow for this and similar packets,
or decide to drop the packet.
b. Direct packet to another flow table further down the pipeline.
c. Drop the packet.
3. If there is no match on any entry and there is no table-miss entry, the packet is dropped.
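The three cases just listed can be summarized with a short sketch. The model below is simplified and purely illustrative (it is not how a real switch is implemented): each entry is a plain dictionary with a priority, match fields, counters, and instructions, and a wildcarded field is simply absent from the match.

```python
# Simplified, illustrative model of the per-table matching steps above.
def process_table(entries, table_miss, packet):
    """Return the instructions to execute for this packet, or None to drop."""
    # Step 1: find matching entries other than the table-miss entry.
    candidates = [e for e in entries
                  if all(packet.get(f) == v for f, v in e['match'].items())]
    if candidates:
        best = max(candidates, key=lambda e: e['priority'])  # highest priority wins
        best['counters']['packets'] += 1                     # step 1a: update counters
        return best['instructions']                          # steps 1b/1c
    if table_miss is not None:
        return table_miss['instructions']                    # step 2: table-miss entry
    return None                                              # step 3: drop the packet

# Example: one exact-match entry plus a table-miss entry that sends to the controller.
entries = [{'priority': 100,
            'match': {'ipv4_dst': '10.0.0.5', 'tcp_dst': 80},
            'counters': {'packets': 0},
            'instructions': ['output:2']}]
table_miss = {'priority': 0, 'match': {}, 'counters': {'packets': 0},
              'instructions': ['output:CONTROLLER']}
print(process_table(entries, table_miss, {'ipv4_dst': '10.0.0.5', 'tcp_dst': 80}))
```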
For the final table in the pipeline, forwarding to another flow table is not an option. If and when a packet is
finally directed to an output port, the accumulated action set is executed and then the packet is queued for
output. Figure 4.7 illustrates the overall ingress pipeline process.

FIGURE 4.7 Packet Flow Through an OpenFlow Switch: Ingress Processing

If egress processing is associated with a particular output port, then after a packet is directed to an output
port at the completion of the ingress processing, the packet is directed to the first flow table of the egress
pipeline. Egress pipeline processing proceeds in the same fashion as for ingress processing, except that there
is no group table processing at the end of the egress pipeline. Egress processing is shown in Figure 4.8.

FIGURE 4.8 Packet Flow Through OpenFlow Switch: Egress Processing

2.4 Control Plane Functions

Figure 5.2 illustrates the functions performed by SDN controllers. The figure illustrates the essential
functions that any controller should provide, as suggested in a paper by Kreutz [KREU15], which include the
following:

FIGURE 5.2 SDN Control Plane Functions and Interfaces
Shortest path forwarding: Uses routing information collected from switches to establish preferred routes.
Notification manager: Receives, processes, and forwards to applications events such as alarm
notifications, security alarms, and state changes.
Security mechanisms: Provides isolation and security enforcement between applications and services.
Topology manager: Builds and maintains switch interconnection topology information.
Statistics manager: Collects data on traffic through the switches.
Device manager: Configures switch parameters and attributes and manages flow tables.
The functionality provided by the SDN controller can be viewed as a network operating system (NOS). As
with a conventional OS, an NOS provides essential services, common application programming interfaces
(APIs), and an abstraction of lower-layer elements to developers. The functions of an SDN NOS, such as
those in the preceding list, enable developers to define network policies and manage networks without
concern for the details of the network device characteristics, which may be heterogeneous and dynamic. The
northbound interface, discussed subsequently, provides a uniform means for application developers and
network managers to access SDN service and perform network management tasks. Further, well-defined
northbound interfaces enable developers to create software that is independent not only of data plane details
but to a great extent usable with a variety of SDN controller servers.
A number of different initiatives, both commercial and open source, have resulted in SDN controller
implementations.
The following list describes a few prominent ones:
OpenDaylight: An open source platform for network programmability to enable SDN, written in Java.
OpenDaylight was founded by Cisco and IBM, and its membership is heavily weighted toward network
vendors. OpenDaylight can be implemented as a single centralized controller, but it also allows controllers to be
distributed, with one or more instances running on one or more clustered servers in the network.

Open Network Operating System (ONOS): An open source SDN NOS, initially released in 2014. It is a
nonprofit effort funded and developed by a number of carriers, such as AT&T and NTT, and other service
providers. Significantly, ONOS is supported by the Open Networking Foundation, making it likely that
ONOS will be a major factor in SDN deployment. ONOS is designed to be used as a distributed controller
and provides abstractions for partitioning and distributing network state onto multiple distributed controllers.
POX: An open source OpenFlow controller that has been implemented by a number of SDN developers
and engineers. POX has a well written API and documentation. It also provides a web-based graphical user
interface (GUI) and is written in Python, which typically shortens its experimental and developmental cycles
compared to some other implementation languages, such as C++.
Beacon: An open source package developed at Stanford. Written in Java and highly integrated into the
Eclipse integrated development environment (IDE). Beacon was the first controller that made it possible for
beginner programmers to work with and create a working SDN environment.
Floodlight: An open source package developed by Big Switch Networks. Although its beginning was
based on Beacon, it was built using Apache Ant, which is a very popular software build tool that makes the
development of Floodlight easier and more flexible. Floodlight has an active community and has a large
number of features that can be added to create a system that best meets the requirements of a specific
organization. Both a web-based and Java-based GUI are available and most of its functionality is exposed
through a REST API.
Ryu: An open source, component-based SDN framework developed by NTT Labs. It is fully developed in
Python.
Onix: Another distributed controller, jointly developed by VMWare, Google, and NTT. Onix is a
commercially available SDN controller.

2.5 Southbound Interface

The southbound interface provides the logical connection between the SDN controller and the data plane
switches (see Figure 5.3). Some controller products and configurations support only a single southbound
protocol. A more flexible approach is the use of a southbound abstraction layer that provides a common
interface for the control plane functions while supporting multiple southbound APIs.
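A southbound abstraction layer of this kind can be pictured as a common driver interface behind which protocol-specific drivers are plugged in. The sketch below is purely illustrative; the class and method names are invented for this example and are not taken from any particular controller.

```python
from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Common interface the control plane calls, independent of the protocol."""

    @abstractmethod
    def push_forwarding_rule(self, device_id, match, actions): ...

    @abstractmethod
    def get_statistics(self, device_id): ...

class OpenFlowDriver(SouthboundDriver):
    def push_forwarding_rule(self, device_id, match, actions):
        # A real driver would translate this into an OpenFlow FlowMod message.
        print(f"OpenFlow FlowMod to {device_id}: {match} -> {actions}")

    def get_statistics(self, device_id):
        # A real driver would issue an OpenFlow statistics request here.
        return {}

class OVSDBDriver(SouthboundDriver):
    def push_forwarding_rule(self, device_id, match, actions):
        # OVSDB is a management/configuration protocol; a real driver would
        # configure the switch rather than install per-flow rules.
        print(f"OVSDB configuration for {device_id}")

    def get_statistics(self, device_id):
        return {}

# The control plane selects a driver per device but always calls the same interface.
drivers = {"switch-1": OpenFlowDriver(), "switch-2": OVSDBDriver()}
drivers["switch-1"].push_forwarding_rule("switch-1", {"ipv4_dst": "10.0.0.5"}, ["output:2"])
```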

FIGURE 5.3 SDN Controller Interfaces

The most commonly implemented southbound API is OpenFlow, covered in some detail in Chapter 4, “SDN
Data Plane and OpenFlow.” Other southbound interfaces include the following:
Open vSwitch Database Management Protocol (OVSDB): Open vSwitch (OVS) is an open source
software project which implements virtual switching that is interoperable with almost all popular
hypervisors. OVS uses OpenFlow for message forwarding in the control plane for both virtual and physical
ports. OVSDB is the protocol used to manage and configure OVS instances.
Forwarding and Control Element Separation (ForCES): An IETF effort that standardizes the interface
between the control plane and the data plane for IP routers.
Protocol Oblivious Forwarding (POF): This is advertised as an enhancement to OpenFlow that
simplifies the logic in the data plane to a very generic forwarding element that need not understand the
protocol data unit (PDU) format in terms of fields at various protocol levels. Rather, matching is done by
means of (offset, length) blocks within a packet. Intelligence about packet format resides at the control plane
level.

2.6 Northbound Interface

The northbound interface enables applications to access control plane functions and services without needing
to know the details of the underlying network switches. The northbound interface is more typically viewed as
a software API rather than a protocol.
Unlike the southbound and eastbound/westbound interfaces, where a number of heterogeneous interfaces
have been defined, there is no widely accepted standard for the northbound interface. The result has been that
a number of unique APIs have been developed for various controllers, complicating the effort to develop
SDN applications. To address this issue the Open Networking Foundation formed the Northbound Interface
Working Group (NBI-WG) in 2013, with the objective of defining and standardizing a number of broadly
useful northbound APIs. As of this writing, the working group has not issued any standards.
A useful insight of the NBI-WG is that even in an individual SDN controller instance, APIs are needed at
different “latitudes.” That is, some APIs may be “further north” than others, and access to one, several, or all
of these different APIs could be a requirement for a given application.
Figure 5.4, from the NBI-WG charter document (October 2013), illustrates the concept of multiple API
latitudes. For example, an application may need one or more APIs that directly expose the functionality of
the controller in order to manage a network domain, and may also use APIs that invoke analytic or reporting
services residing on the controller.

FIGURE 5.4 Latitude of Northbound Interfaces

Figure 5.5 shows a simplified example of an architecture with multiple levels of northbound APIs, the levels
of which are described in the list that follows.

FIGURE 5.5 SDN Controller APIs

Base controller function APIs: These APIs expose the basic functions of the controller and are used by
developers to create network services.
Network service APIs: These APIs expose network services to the north.
Northbound interface application APIs: These APIs expose application-related services that are built on
top of network services.
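Because northbound interfaces are most often exposed as REST APIs, an application typically interacts with these levels over HTTP. The sketch below uses Python's requests library against a hypothetical controller; the URL paths and JSON fields are invented for illustration and do not correspond to any specific controller's API.

```python
import requests

CONTROLLER = "http://controller.example.local:8080"   # hypothetical controller address

# Query an (assumed) network service API for the current topology.
topology = requests.get(f"{CONTROLLER}/api/topology").json()
print("Switches:", [sw["id"] for sw in topology.get("switches", [])])

# Ask an (assumed) base controller function API to install a forwarding rule.
rule = {
    "switch": "00:00:00:00:00:00:00:01",
    "priority": 100,
    "match": {"ipv4_dst": "10.0.0.5", "tcp_dst": 80},
    "actions": [{"type": "OUTPUT", "port": 2}],
}
resp = requests.post(f"{CONTROLLER}/api/flows", json=rule)
resp.raise_for_status()
```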

2.7 SDN Controller

SDN (Software Defined Networking) controllers are a critical component in SDN architecture. They act as
the brain of the network, managing and controlling the flow of data traffic within the network. SDN
controllers separate the control plane from the data plane, enabling centralized management and
programmability of network devices.

2.7.1 Types of SDN Controllers:

1. Open Source Controllers: These are SDN controllers that are developed and maintained by open-source
communities. Examples include OpenDaylight and ONOS (Open Network Operating System).

2. Vendor-specific Controllers: These controllers are developed and provided by specific networking
vendors. Examples include Cisco's Application Centric Infrastructure (ACI) controller and VMware's NSX
controller.

2.7.2 Advantages of SDN Controllers:

1. Centralized Management: SDN controllers provide a centralized point of control for the entire
network, allowing administrators to configure and manage network devices from a single interface.

2. Programmability: SDN controllers enable network programmability, allowing administrators to


automate network configurations and implement policies through software-defined policies rather
than manual configurations on individual devices.

3. Dynamic Network Control: SDN controllers facilitate dynamic network control by adjusting
network configurations in real-time based on changing traffic patterns and network conditions.

4. Reduced Hardware Dependency: With SDN controllers, the network intelligence is centralized in
software, reducing the dependency on specialized, proprietary hardware and enabling the use of
commodity hardware.

2.7.3 Disadvantages of SDN Controllers:

1. Single Point of Failure: Since SDN controllers are centralized, they can become a single point of
failure. If the controller fails, the entire network may be affected.

2. Security Concerns: Centralizing network control introduces security risks, as compromising the
SDN controller could potentially compromise the entire network. Robust security measures must be
implemented to mitigate these risks.

3. Complexity: Implementing SDN controllers and integrating them with existing network
infrastructure can be complex and require significant expertise. Organizations may face challenges in
transitioning to SDN, especially if they have legacy systems in place.

4. Vendor Lock-in: Using vendor-specific SDN controllers may lead to vendor lock-in, limiting
flexibility and interoperability with devices from other vendors.

Despite these challenges, the advantages of SDN controllers in terms of centralized management,
programmability, and dynamic network control make them a compelling choice for modern network
architectures seeking flexibility, scalability, and efficiency.

2.8 Distributed Controllers

A key architectural design decision is whether a single centralized controller or a distributed set of
controllers will be used to control the data plane switches. A centralized controller is a single server that
manages all the data plane switches in the network.

In a large enterprise network, the deployment of a single controller to manage all network devices would
prove unwieldy or undesirable. A more likely scenario is that the operator of a large enterprise or carrier
network divides the whole network into a number of nonoverlapping SDN domains, also called SDN islands

(Figure 5.10), managed by distributed controllers. Reasons for using SDN domains include those in the
list that follows.

FIGURE 5.10 SDN Domain Structure

Scalability: The number of devices an SDN controller can feasibly manage is limited. Therefore, a
reasonably large network may need to deploy multiple SDN controllers.

Reliability: The use of multiple controllers avoids the risk of a single point of failure.

Privacy: A carrier may choose to implement different privacy policies in different SDN domains. For
example, an SDN domain may be dedicated to a set of customers who implement their own highly
customized privacy policies, requiring that some networking information in this domain (for example,
network topology) should not be disclosed to an external entity.

Incremental deployment: A carrier’s network may consist of portions of legacy and nonlegacy
infrastructure. Dividing the network into multiple individually manageable SDN domains allows for flexible
incremental deployment.

Distributed controllers may be collocated in a small area, or widely dispersed, or a combination of the two.
Closely placed controllers offer high throughput and are appropriate for data centers, whereas dispersed
controllers accommodate multilocation networks.

Typically, controllers are distributed horizontally. That is, each controller governs a nonoverlapping subset of
the data plane switches. A vertical architecture is also possible, in which control tasks are distributed to
different controllers depending on criteria such as network view and locality requirements.

In a distributed architecture, a protocol is needed for communication among the controllers. In principle, a
proprietary protocol could be used for this purpose, although an open or standard protocol would clearly be
preferable for purposes of interoperability.

The functions associated with the east/westbound interface for a distributed architecture include maintaining
either a partitioned or replicated database of network topology and parameters, and monitoring/notification
functions. The latter function includes checking whether a controller is alive and coordinating changes in
assignment of switches to controllers.

2.8.1 High-Availability Clusters

Within a single domain, the controller function can be implemented on a high-availability (HA) cluster.
Typically, there would be two or more nodes that share a single IP address that is used by external systems
(both north and southbound) to access the cluster. An example is the IBM SDN for Virtual Environments
product, which uses two nodes. Each node is considered a peer of the other node in the cluster for data
replication and sharing of the external IP address. When HA is running, the primary node is responsible for
answering all traffic that is sent to the cluster’s external IP address and holds a read/write copy of the
configuration data. Meanwhile, the second node operates as a standby, with a read-only copy of the
configuration data, which is kept current with the primary’s copy. The secondary node monitors the state of
the external IP. If the secondary node determines that the primary node is no longer answering the external
IP, it triggers a failover, changing its mode to that of primary node. It assumes the responsibility for
answering the external IP and changes its copy of configuration data to be read/write. If the old primary
reestablishes connectivity, an automatic recovery process is triggered to convert the old primary to
secondary status so that configuration changes made during the failover period are not lost.
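The secondary node's failover behavior can be sketched as a simple monitoring loop. In the sketch below, the check interval, failure threshold, and the ping-based probe are all hypothetical choices made for illustration; a real HA cluster would use a dedicated heartbeat mechanism.

```python
import subprocess
import time

EXTERNAL_IP = "192.0.2.10"       # hypothetical cluster address
CHECK_INTERVAL = 2               # seconds between probes (assumed)
FAILURE_THRESHOLD = 3            # consecutive failed probes before failover (assumed)

def primary_is_answering(ip):
    """Probe the external IP; a single ping is used here as a stand-in."""
    return subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                           stdout=subprocess.DEVNULL) == 0

def promote_to_primary():
    print("Assuming primary role: answering external IP, config copy now read/write")

def run_secondary():
    failures = 0
    while True:
        if primary_is_answering(EXTERNAL_IP):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_to_primary()    # take over the external IP
                return
        time.sleep(CHECK_INTERVAL)
```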

ODL Helium has HA built in, and Cisco XNC and the Open Network controller have HA features (up to five
in a cluster).

2.8.2 Federated SDN Networks

The distributed SDN architecture discussed in the preceding paragraphs refers to a system of SDN domains
that are all part of a single enterprise network. The domains may be collocated or on separate sites. In either
case, the management of all the data plane switches is under the control of a single network management
function.

It is also possible for SDN networks that are owned and managed by different organizations to cooperate
using east/westbound protocols. Figure 5.11 is an example of the potential for inter-SDN controller
cooperation.

FIGURE 5.11 Federation of SDN Controllers [GUPT14]

In this configuration, we have a number of service subscribers to a data center network providing cloud-
based services. Typically, as was illustrated previously in Figure 1.3, subscribers are connected to the service
network through a hierarchy of access, distribution, and core networks. These intermediate networks may all
be operated by the data center network, or they may involve other organizations. In the latter case, if all the
networks implement SDN, they need to share common conventions for sharing control plane parameters, such
as quality of service (QoS), policy information, and routing information.

2.8.3 Border Gateway Protocol

Before proceeding further with our discussion, it will be useful to provide an overview of the Border
Gateway Protocol (BGP). BGP was developed for use in conjunction with internets that use the TCP/IP suite,
although the concepts are applicable to any internet. BGP has become the preferred exterior router protocol
(ERP) for the Internet.

BGP enables routers, called gateways in the standard, in different autonomous systems to cooperate in the
exchange of routing information. The protocol operates in terms of messages, which are sent over TCP
connections. The current version of BGP is known as BGP-4.

Three functional procedures are involved in BGP:

Neighbor acquisition

Neighbor reachability

Network reachability

Two routers are considered to be neighbors if they are attached to the same network or communication link.
If they are attached to the same network, communication between the neighbor routers might require a path
through other routers within the shared network. If the two routers are in different autonomous systems, they
may want to exchange routing information. For this purpose, it is necessary first to perform neighbor
acquisition. The term neighbor refers to two routers that share the same network. In essence, neighbor
acquisition occurs when two neighboring routers in different autonomous systems agree to exchange routing
information regularly. A formal acquisition procedure is needed because one of the routers may not want to
participate. For example, the router may be overburdened and may not want to be responsible for traffic
coming in from outside the AS. In the neighbor acquisition process, one router sends a request message to
the other, which may either accept or refuse the offer. The protocol does not address the issue of how one
router knows the address or even the existence of another router, nor how it decides that it needs to exchange
routing information with that particular router. These issues must be dealt with at configuration time or by
active intervention of a network manager.

To perform neighbor acquisition, one router sends an Open message to another. If the target router accepts
the request, it returns a Keepalive message in response.

Once a neighbor relationship is established, the neighbor reachability procedure is used to maintain the
relationship. Each partner needs to be assured that the other partner still exists and is still engaged in the
neighbor relationship. For this purpose, the two routers periodically issue Keepalive messages to each other.

The final procedure specified by BGP is network reachability. Each router maintains a database of the
networks that it can reach and the preferred route for reaching each network. Whenever a change is made to
this database, the router issues an Update message that is broadcast to all other routers for which it has a
neighbor relationship. Because the Update message is broadcast, all BGP routers can build up and maintain
their routing information.
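To make the three procedures concrete, the following toy model traces the message flow of neighbor acquisition, neighbor reachability, and network reachability. It is an illustrative sketch only, not an implementation of the BGP-4 protocol or its message formats.

```python
# Toy model of the three BGP procedures (not a real BGP-4 implementation).
class BGPSpeaker:
    def __init__(self, name):
        self.name = name
        self.neighbors = []          # established neighbor relationships
        self.reachability = {}       # network prefix -> preferred route

    def acquire_neighbor(self, peer):
        """Neighbor acquisition: send Open; peer answers with Keepalive if it accepts."""
        if peer.accept_open(self):
            self.neighbors.append(peer)
            peer.neighbors.append(self)

    def accept_open(self, peer):
        return True                  # a real router could refuse the request

    def send_keepalives(self):
        """Neighbor reachability: periodic Keepalive messages."""
        for peer in self.neighbors:
            print(f"{self.name} -> {peer.name}: KEEPALIVE")

    def update_route(self, prefix, route):
        """Network reachability: on any database change, send Update messages."""
        self.reachability[prefix] = route
        for peer in self.neighbors:
            print(f"{self.name} -> {peer.name}: UPDATE {prefix} via {route}")

a, b = BGPSpeaker("AS1-router"), BGPSpeaker("AS2-router")
a.acquire_neighbor(b)
a.update_route("203.0.113.0/24", "AS1")
```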

2.8.4 Routing and QoS Between Domains

For routing outside a controller’s domain, the controller establishes a BGP connection with each neighboring
router. Figure 5.12 illustrates a configuration with two SDN domains that are linked only through a non-SDN
AS.

FIGURE 5.12 Heterogeneous Autonomous Systems with OpenFlow and Non-OpenFlow Domains

Within the non-SDN AS, OSPF is used for interior routing. OSPF is not needed in an SDN domain; rather,
the necessary routing information is reported from each data plane switch to the centralized controller using
a southbound protocol (in this case, OpenFlow). Between each SDN domain and the AS, BGP is used to
exchange information, such as the following:

Reachability update: Exchange of reachability information facilitates inter-SDN domain routing. This
allows a single flow to traverse multiple SDNs and each controller can select the most appropriate path in the
network.

Flow setup, tear-down, and update requests: Controllers coordinate flow setup requests, which contain
information such as path requirements, QoS, and so on, across multiple SDN domains.

Capability Update: Controllers exchange information on network-related capabilities such as bandwidth,


QoS and so on, in addition to system and software capabilities available inside the domain.

Several additional points are worth observing with respect to Figure 5.12:

The figure depicts each AS as a cloud containing interconnected routers and, in the case of an SDN
domain, a controller. The cloud represents an internet, so that the connection between any two routers is a
network within the internet. Similarly, the connection between two adjacent autonomous systems is a
network, which may be part of one of the two adjacent autonomous systems, or a separate network.

For an SDN domain, the BGP function is implemented in the SDN controller rather than a data plane
router. This is because the controller is responsible for managing the topology and making routing decisions.

The figure shows a BGP connection between autonomous systems 1 and 3. It may be that these networks
are not directly connected by a single network. However, if the two SDN domains are part of a single SDN
system, or if they are federated, it may be desirable to exchange additional SDN-related information.

2.8.5 Using BGP for QoS Management

A common practice for inter-AS interconnection is a best-effort interconnection only. That is, traffic
forwarding between autonomous systems is without traffic class differentiation and without any forwarding
guarantee. It is common for network providers to reset any IP packet traffic class markings to zero, the best-
effort marking, at the AS ingress router, which eliminates any traffic differentiation. Some providers perform
higher-layer classification at the ingress to infer the forwarding requirements and to match them to their AS-
internal QoS forwarding policy. There is no standardized set of classes, no standardized marking (class
encoding), and no standardized forwarding behavior that cross-domain traffic could rely on. However, RFC
4594 (Configuration Guidelines for DiffServ Service Classes, August 2006) provides a set of “best practices”
related to these parameters. QoS policy decisions are taken by network providers independently and in an
uncoordinated fashion. This general statement does not cover existing individual agreements, which do offer
quality-based interconnection with strict QoS guarantees. However, such service level agreement (SLA)-
based agreements are of bilateral or multilateral nature and do not offer a means for a general “better than
best effort” interconnection.

IETF is currently at work on a standardized scheme for QoS marking using BGP (BGP Extended
Community for QoS Marking, draft-knoll-idr-qos-attribute-12, July 10, 2015). Meanwhile, SDN providers
have implemented their own capabilities using the extensible nature of BGP. In either case, the interaction
between SDN controllers in different domains using BGP would involve the steps illustrated in Figure 5.13
and described in the list that follows.

FIGURE 5.13 East-West Connection Establishment, Route, and Flow Setup

1. The SDN controller must be configured with BGP capability and with information about the location of
neighboring BGP entities.

2. BGP is triggered by a start or activation event within the controller.

3. The BGP entity in the controller attempts to establish a TCP connection with each neighboring BGP entity.

4. Once a TCP connection is established, the controller’s BGP entity exchanges Open messages with the
neighbor. Capability information is exchanged using the Open messages.

5. The exchange completes with the establishment of a BGP connection.

6. Update messages are used to exchange NLRI (network layer reachability information), indicating what
networks are reachable via this entity. Reachability information is used in the selection of the most
appropriate data path between SDN controllers. Information obtained through the NLRI parameter is used to
update the controller’s Routing Information Base (RIB). This in turn enables the controller to set the
appropriate flow information in the data plane switches.

7. The Update message can also be used to exchange QoS information, such as available capacity.

8. Route selection is performed through the BGP decision process when more than one path is available. Once the
path is established, packets can traverse successfully between the two SDN domains.

Unit – 3 SDN APPLICATIONS

3.1 SDN Application Plane Architecture

The application plane contains applications and services that define, monitor, and control network resources
and behavior. These applications interact with the SDN control plane via application-control interfaces, for
the SDN control layer to automatically customize the behavior and the properties of network resources. The
programming of an SDN application makes use of the abstracted view of network resources provided by the
SDN control layer by means of information and data models exposed via the application-control interface.

This section provides an overview of application plane functionality, depicted in Figure 6.1. The elements in
this figure are analyzed through a bottom-up approach, and subsequent sections provide detail on specific
application areas.

FIGURE 6.1 SDN Application Plane Functions and Interfaces

3.1.1 Northbound Interface

The northbound interface enables applications to access control plane functions and services without needing
to know the details of the underlying network switches. Typically, the northbound interface provides an
abstract view of network resources controlled by the software in the SDN control plane.

Figure 6.1 indicates that the northbound interface can be a local or remote interface. For a local interface, the
SDN applications are running on the same server as the control plane software (controller network operating
system). Alternatively, the applications could be run on remote systems and the northbound interface is a
protocol or application programming interface (API) that connects the applications to the controller network
operating system (NOS) running on central server. Both architectures are likely to be implemented.

An example of a northbound interface is the REST API for the Ryu SDN network operating system,
described in Section 5.4.

3.1.2 Network Services Abstraction Layer

RFC 7426 defines a network services abstraction layer between the control and application planes and
describes it as a layer that provides service abstractions that can be used by applications and services. Several
functional concepts are suggested by the placement of this layer in the SDN architecture:

This layer could provide an abstract view of network resources that hides the details of the underlying data
plane devices.

This layer could provide a generalized view of control plane functionality, so that applications could be
written that would operate across a range of controller network operating systems.

This functionality is similar to that of a hypervisor or virtual machine monitor that decouples applications
from the underlying OS and underlying hardware.

This layer could provide a network virtualization capability that allows different views of the underlying
data plane infrastructure.

Arguably, the network services abstraction layer could be considered to be part of the northbound interface,
with the functionality incorporated in the control plane or the application plane.

A wide range of schemes have been developed that roughly fall into this layer, and a full treatment is beyond
our scope. Section 6.2 provides several examples for a better understanding.

3.1.2.1 Network Applications

There are many network applications that could be implemented for an SDN. Different published surveys of
SDN have come up with different lists and even different general categories of SDN-based network
applications. Figure 6.1 includes six categories that encompass the majority of SDN applications.

3.1.2.2 User Interface

The user interface enables a user to configure parameters in SDN applications and to interact with
applications that support user interaction. Again, there are two possible interfaces. A user that is collocated
with the SDN application server (which may or may not include the control plane) can use the server’s
keyboard/display. More typically, the user would log on to the application server over a network or
communications facility.

3.2 Network Services Abstraction Layer

In the context of the discussion, abstraction refers to the amount of detail about lower levels of the model that
is visible to higher levels. More abstraction means less detail; less abstraction means more detail. An
abstraction layer is a mechanism that translates a high-level request into the low-level commands required
to perform the request. An API is one such mechanism. It shields the implementation details of a lower level
of abstraction from software at a higher level. A network abstraction represents the basic properties or
characteristics of network entities (such as switches, links, ports, and flows) in such a way that network
programs can focus on the desired functionality without having to program the detailed actions.

FIGURE 6.2 SDN Architecture and Abstractions

Let's imagine you have a big box of LEGO blocks, and you want to build something cool with them. But
instead of just building one thing, you want to make lots of different things, like a castle, a spaceship, or a
car. To help you do that, you need some rules and tools to organize how you use the LEGO blocks.

Now, let's compare this to computer networks, where instead of LEGO blocks, we have lots of devices like
computers, routers, and switches. These devices need to work together to send data from one place to
another, just like how you need to put LEGO pieces together to build something.

In the world of Software Defined Networking (SDN), there are three main concepts: Network Service
Abstraction Layer, Distribution Abstraction, and Forwarding Abstraction. Let's break them down:

1. Network Service Abstraction Layer (NSAL):
Think of NSAL like the instruction manual for building with LEGO blocks. It's a set of rules and tools that
helps us control and manage the network. Just like how the manual tells you how to build different LEGO
creations, NSAL helps us control how data moves through the network. It simplifies complex tasks like
setting up connections or managing security.

2. Distribution Abstraction:
Now, imagine you have a huge LEGO city with different neighborhoods. Each neighborhood has its own
set of LEGO blocks and buildings. In networking, distribution abstraction is like dividing the network into
different neighborhoods or zones. This helps organize how data is handled in different parts of the network.
It's like saying, "These devices over here will handle data going to this part of the network, and those devices
over there will handle data going to another part."

3. Forwarding Abstraction:
Lastly, think of forwarding abstraction as the roads and paths in your LEGO city. Just like how roads help
LEGO people move around, forwarding abstraction helps data packets move through the network. It's like
creating a map that tells the network devices where to send data based on its destination. This way, data can
travel efficiently from one place to another.

So, to sum it up:


- NSAL is like the instruction manual for managing the network.
- Distribution abstraction is like dividing the network into different zones.
- Forwarding abstraction is like creating paths for data to travel through the network.

These concepts help make SDN more flexible, easier to manage, and more efficient, just like how rules and
tools help you build amazing things with your LEGO blocks!

Frenetic

An example of a network services abstraction layer is the programming language Frenetic. Frenetic enables
network operators to program the network as a whole instead of manually configuring individual network
elements. Frenetic was designed to solve challenges with the use of OpenFlow-based models by working
with an abstraction at the network level as opposed to OpenFlow, which directly goes down to the network
element level.

Frenetic includes an embedded query language that provides effective abstractions for reading network state.
This language is similar to SQL and includes segments for selecting, filtering, splitting, merging and
aggregating the streams of packets. Another special feature of this language is that it enables the queries to be
composed with forwarding policies. A compiler produces the control messages needed to query and tabulate
the counters on switches.

Frenetic consists of two levels of abstraction, as illustrated in Figure 6.4. The upper level, which is the
Frenetic source-level API, provides a set of operators for manipulating streams of network traffic. The query
language provides means for reading the state of the network, merging different queries, and expressing
high-level predicates for classifying, filtering, transforming, and aggregating the packet streams traversing
the network. The lower level of abstraction is provided by a run-time system that operates in the SDN
controller. It translates high-level policies and queries into low-level flow rules and then issues the needed
OpenFlow commands to install these rules on the switches.
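To give a flavor of the two levels, the sketch below writes a Frenetic-style query in Python. This is not actual Frenetic syntax; the combinators are invented for illustration and only show how a query selects, filters, groups, and aggregates packet streams and is then composed with a forwarding policy that the run-time system would compile into OpenFlow rules.

```python
# Illustrative only: invented combinators mimicking the style of a
# Frenetic-like query language; this is not the real Frenetic API.
def Select(field):            return ("select", field)
def Where(**predicate):       return ("where", predicate)
def GroupBy(*fields):         return ("group_by", fields)
def Every(seconds):           return ("every", seconds)

# "Count bytes of web traffic, per source host, every 30 seconds."
web_traffic_query = [
    Select("bytes"),
    Where(eth_type=0x0800, ip_proto=6, tcp_dst=80),
    GroupBy("ipv4_src"),
    Every(30),
]

# A forwarding policy the query is composed with; the run-time system would
# translate the combined program into low-level flow rules on the switches.
forwarding_policy = {"default": "shortest_path"}
program = {"query": web_traffic_query, "policy": forwarding_policy}
print(program)
```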

FIGURE 6.4 Frenetic Architecture

3.3 Traffic Engineering

Traffic engineering is a method for dynamically analyzing, regulating, and predicting the behavior of data
flowing in networks with the aim of performance optimization to meet service level agreements (SLAs).
Traffic engineering involves establishing routing and forwarding policies based on QoS requirements. With
SDN, the task of traffic engineering should be considerably simplified compared with a non-SDN network.
SDN offers a uniform global view of heterogeneous equipment and powerful tools for configuring and
managing network switches.

This is an area of great activity in the development of SDN applications. The SDN survey paper by Kreutz in
the January 2015 Proceedings of the IEEE [KREU15] lists the following traffic engineering functions that
have been implemented as SDN applications:

On-demand virtual private networks

Load balancing

Energy-aware routing

Quality of service (QoS) for broadband access networks

Scheduling/optimization

Traffic engineering with minimal overhead

Dynamic QoS routing for multimedia apps

Fast recovery through fast-failover groups

QoS policy management framework

QoS enforcement

QoS over heterogeneous networks

Multiple packet schedulers

Queue management for QoS enforcement

Divide and spread forwarding tables

3.3.1 PolicyCop

An instructive example of a traffic engineering SDN application is PolicyCop [BARI13], which is an


automated QoS policy enforcement framework. It leverages the programmability offered by SDN and
OpenFlow for

Dynamic traffic steering

Flexible flow-level control

Dynamic traffic classes

Custom flow aggregation levels

Key features of PolicyCop are that it monitors the network to detect policy violations (based on a QoS SLA)
and reconfigures the network to reinforce the violated policy.

As shown in Figure 6.5, PolicyCop consists of eleven software modules and two databases, installed in both
the application plane and the control plane. PolicyCop uses the control plane of SDNs to monitor the
compliance with QoS policies and can automatically adjust the control plane rules and flow tables in the data
plane based on the dynamic network traffic statistics.

FIGURE 6.5 PolicyCop Architecture

In the control plane, PolicyCop relies on four modules and a database for storing control rules, described as
follows:

Admission Control: Accepts or rejects requests from the resource provisioning module for reserving
network resources, such as queues, flow-table entries, and capacity.

Routing: Determines path availability based on the control rules in the rule database.

Device Tracker: Tracks the up/down status of network switches and their ports.

Statistics Collection: Uses a mix of passive and active monitoring techniques to measure different
network metrics.

Rule Database: The application plane translates high-level network-wide policies to control rules and
stores them in the rule database.

A RESTful northbound interface connects these control plane modules to the application plane modules,
which are organized into two components: a policy validator that monitors the network to detect policy
violations, and a policy enforcer that adapts control plane rules based on network conditions and high-level
policies. Both modules rely on a policy database, which contains QoS policy rules entered by a network
manager. The modules are as follows:

Traffic Monitor: Collects the active policies from the policy database and determines the appropriate monitoring
interval, network segments, and metrics to be monitored.

Policy Checker: Checks for policy violations, using input from the policy database and the Traffic
Monitor.

Event Handler: Examines violation events and, depending on event type, either automatically invokes the
policy enforcer or sends an action request to the network manager.

Topology Manager: Maintains a global view of the network, based on input from the device tracker.

Resource Manager: Keeps track of currently allocated resources using admission control and statistics
collection.

Policy Adaptation: Consists of a set of actions, one for each type of policy violation. Table 6.1 shows the
general functionality of some of the policy adaptation actions. The actions are pluggable components that
can be specified by the network manager.

TABLE 6.1 Functionality of Some Example Policy Adaptation Actions (PAAs)

Resource Provisioning: This module allocates additional resources, releases existing ones, or both,
based on the violation event.

Figure 6.6 shows the process workflow in PolicyCop.

FIGURE 6.6 PolicyCop Workflow
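The interaction among the Traffic Monitor, Policy Checker, and Event Handler shown in Figure 6.6 can be illustrated with a simplified loop. The policy format, metric values, and threshold check below are invented for this example; they are not taken from the PolicyCop implementation.

```python
import time

# Hypothetical policy database entries: network segment, metric, SLA threshold.
policy_db = [
    {"id": "qos-1", "segment": "s1-s2", "metric": "latency_ms", "max": 20},
    {"id": "qos-2", "segment": "s3-s4", "metric": "loss_pct",   "max": 1.0},
]

def measure(segment, metric):
    """Stand-in for the Traffic Monitor / Statistics Collection modules."""
    return {"latency_ms": 35, "loss_pct": 0.2}[metric]

def enforce(policy):
    """Stand-in for the Policy Enforcer / Resource Provisioning modules."""
    print(f"Reconfiguring network to restore policy {policy['id']}")

def policy_checker_loop(interval=10):
    while True:
        for policy in policy_db:
            value = measure(policy["segment"], policy["metric"])
            if value > policy["max"]:       # violation detected by the Policy Checker
                enforce(policy)             # Event Handler invokes the enforcer
        time.sleep(interval)
```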

3.4 Measurement and Monitoring

In software-defined networking (SDN), measurement and monitoring play crucial roles in ensuring the
network's performance, security, and overall health. Here's how they are defined and utilized in SDN:

1. Measurement: Measurement in SDN involves the collection of various network-related data to
understand the behavior, performance, and usage patterns of the network elements. This data can include
metrics such as bandwidth utilization, packet loss, latency, and traffic patterns. Measurements are typically
gathered from different points within the SDN architecture, including switches, routers, controllers, and end
hosts.

• Flow Measurement: SDN controllers often measure and track flow statistics, such as the number of
packets, bytes, and duration of each flow traversing the network. This information helps in traffic
engineering, quality of service (QoS) enforcement, and troubleshooting.

• Resource Utilization: Measurement also involves monitoring the utilization of network resources
such as CPU, memory, and link bandwidth. By tracking resource usage, SDN controllers can make
informed decisions about resource allocation and optimization.

• Security Monitoring: Measurement extends to security aspects as well, where SDN controllers may
collect data on network anomalies, intrusion attempts, and suspicious traffic patterns for detecting
and mitigating cyber threats.

2. Monitoring: Monitoring in SDN refers to the continuous observation and analysis of network
performance and behavior in real-time or near real-time. Monitoring systems in SDN architectures typically
rely on data collected through measurement processes.

• Real-time Monitoring: SDN controllers and monitoring tools continuously collect and analyze
network data to detect anomalies, performance degradation, or security breaches as they occur. Real-
time monitoring enables rapid response to network events and helps maintain network reliability and
security.

• Historical Analysis: Monitoring systems also store historical data for trend analysis, capacity
planning, and performance optimization. By analyzing historical network behavior, administrators
can identify long-term patterns, forecast future demands, and optimize network resources
accordingly.

• Visualization and Reporting: Monitoring tools often provide visualization interfaces and reporting
functionalities to present network data in a comprehensible format. Graphs, charts, and dashboards
allow administrators to quickly assess network health, identify performance bottlenecks, and track
key performance indicators (KPIs).

Measurement and monitoring are fundamental aspects of SDN management, providing administrators with
valuable insights into network operation, performance, and security posture. By leveraging measurement
data and real-time monitoring capabilities, SDN controllers can dynamically adapt network configurations,
optimize resource allocation, and ensure efficient and secure network operation.
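As a concrete example of flow measurement, an OpenFlow controller can periodically request per-flow counters from each switch. The sketch below shows the general shape of such a collector, assuming the Ryu framework as the environment; scheduling of the requests and error handling are omitted.

```python
# Minimal sketch of flow statistics collection in a Ryu application.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls

class FlowStatsCollector(app_manager.RyuApp):

    def request_stats(self, datapath):
        """Ask the switch for counters of all its flow entries."""
        parser = datapath.ofproto_parser
        datapath.send_msg(parser.OFPFlowStatsRequest(datapath))

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def flow_stats_reply_handler(self, ev):
        """Log packet and byte counts for each flow entry reported."""
        for stat in ev.msg.body:
            self.logger.info("match=%s packets=%d bytes=%d",
                             stat.match, stat.packet_count, stat.byte_count)
```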

3.5 Security

Applications in this area have one of two goals:

Address security concerns related to the use of SDN: SDN involves a three-layer architecture
(application, control, data) and new approaches to distributed control and encapsulating data. All of this
introduces the potential for new vectors for attack. Threats can occur at any of the three layers or in the
communication between layers. SDN applications are needed to provide for the secure use of SDN itself.

Use the functionality of SDN to improve network security: Although SDN presents new security
challenges for network designers and managers, it also provides a platform for implementing consistent,
centrally managed security policies and mechanisms for the network. SDN allows the development of SDN
security controllers and SDN security applications that can provision and orchestrate security services and
mechanisms.

This section provides an example of an SDN security application that illustrates the second goal. We examine
the topic of SDN security in detail in Chapter 16, “Security.”

3.5.1 OpenDaylight DDoS Application

In 2014, Radware, a provider of application delivery and application security solutions for virtual and cloud
data centers, announced its contribution to the OpenDaylight Project with Defense4All, an open SDN
security application integrated into OpenDaylight. Defense4All offers carriers and cloud providers
distributed denial of service (DDoS) detection and mitigation as a native network service. Using the
OpenDaylight SDN Controller that programs SDN-enabled networks to become part of the DoS/DDoS
protection service itself, Defense4All enables operators to provision a DoS/DDoS protection service per
virtual network segment or per customer.

Defense4All uses a common technique for defending against DDoS attacks, which consists of the following
elements:

Collection of traffic statistics and learning of statistics behavior of protected objects during peacetime. The
normal traffic baselines of the protected objects are built from these collected statistics.

Detection of DDoS attack patterns as traffic anomalies deviating from normal baselines.

Diversion of suspicious traffic from its normal path to attack mitigation systems (AMSs) for traffic
scrubbing, selective source blockage, and so on. Clean traffic exiting out of scrubbing centers is re-injected
back into the packet’s original destination.

Figure 6.7 shows the overall context of the Defense4All application. The underlying SDN network consists
of a number of data plane switches that support traffic among client and server devices. Defense4All
operates as an application that interacts with the controller over an OpenDaylight controller (ODC)
northbound API. Defense4All supports a user interface for network managers that can either be a command
line interface or a RESTful API. Finally, Defense4All has an API to communicate with one or more AMSs.

FIGURE 6.7 OpenDaylight DDoS Application

Administrators can configure Defense4All to protect certain networks and servers, known as protected
networks (PNs) and protected objects (POs). The application instructs the controller to install traffic counting
flows for each protocol of each configured PO in every network location through which traffic of the subject
PO flows.

Defense4All then monitors traffic of all configured POs, summarizing readings, rates, and averages from all
relevant network locations. If it detects a deviation from normal learned traffic behavior in a protocol (such
as TCP, UDP, ICMP, or the rest of the traffic) of a particular PO, Defense4All declares an attack against that
protocol in the subject PO. Specifically, Defense4All continuously calculates traffic averages for the real-time
traffic it measures using OpenFlow; when the real-time traffic deviates from the average by 80 percent, an
attack is assumed.
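This baseline-and-deviation detection can be approximated with a simple moving average, as in the sketch below. The 80 percent deviation threshold follows the description above; everything else (the window size, data structures, and function names) is invented for illustration.

```python
from collections import deque

DEVIATION_THRESHOLD = 0.8   # attack assumed when traffic deviates 80% from average
WINDOW = 5                  # number of peacetime samples in the baseline (assumed)

class ProtocolBaseline:
    """Learns normal ('peacetime') traffic rates for one protocol of one PO."""
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def update_and_check(self, rate):
        """Feed one rate sample derived from OpenFlow counters; True = attack."""
        if len(self.samples) == WINDOW:
            average = sum(self.samples) / WINDOW
            if average > 0 and abs(rate - average) / average > DEVIATION_THRESHOLD:
                return True   # deviation exceeds the learned baseline by more than 80%
        self.samples.append(rate)
        return False

baseline = ProtocolBaseline()
for sample in (100, 102, 98, 101, 99):    # peacetime learning
    baseline.update_and_check(sample)
print(baseline.update_and_check(250))     # sudden surge -> True (attack declared)
```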

To mitigate a detected attack, Defense4All performs the following procedure:

1. It validates that the AMS device is alive and selects a live connection to it. Currently, Defense4All is
configured to work with Radware’s AMS, known as DefensePro.

2. It configures the AMS with a security policy and normal rates of the attacked traffic. This provides the
AMS with the information needed to enforce a mitigation policy until traffic returns to normal rates.

3. It starts monitoring and logging syslogs arriving from the AMS for the subject traffic. As long as
Defense4All continues receiving syslog attack notifications from the AMS regarding this attack, Defense4All
continues to divert traffic to the AMS, even if the flow counters for this PO do not indicate any more attacks.

4. It maps the selected physical AMS connection to the relevant PO link. This typically involves changing
link definitions on a virtual network, using OpenFlow.

5. It installs higher-priority flow table entries so that the attack traffic flow is redirected to the AMS and re-
injects traffic from the AMS back to the normal traffic flow route. When Defense4All decides that the attack
is over (no attack indication from either flow table counters or from the AMS), it reverts the previous actions:
It stops monitoring for syslogs about the subject traffic, it removes the traffic diversion flow table entries,
and it removes the security configuration from the AMS. Defense4All then returns to peacetime monitoring.
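Step 5, the traffic diversion, amounts to installing higher-priority flow entries that steer the attacked traffic toward the AMS port and removing them once the attack ends. The following sketch illustrates the idea in Ryu-style OpenFlow terms; the port numbers, priorities, and match fields are hypothetical and do not reproduce the actual Defense4All code.

```python
def divert_to_ams(datapath, attacked_dst_ip, ams_port, normal_priority=100):
    """Install a higher-priority entry redirecting suspect traffic to the AMS."""
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=attacked_dst_ip)
    actions = [parser.OFPActionOutput(ams_port)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    # Priority above the normal forwarding entries so the diversion wins.
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath,
                                        priority=normal_priority + 100,
                                        match=match, instructions=inst))

def remove_diversion(datapath, attacked_dst_ip, normal_priority=100):
    """Revert the diversion once the attack is considered over."""
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=attacked_dst_ip)
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath,
                                        command=ofproto.OFPFC_DELETE_STRICT,
                                        priority=normal_priority + 100,
                                        match=match,
                                        out_port=ofproto.OFPP_ANY,
                                        out_group=ofproto.OFPG_ANY))
```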

Figure 6.8 shows the principal software components of Defense4All. The overall application structure,
referred to as a framework, contains the modules described in the list that follows.

FIGURE 6.8 Defense4All Software Architecture Detail

Web (REST) Server: Interface to network manager.

Framework Main: Mechanism to start, stop, or reset the framework.

Framework REST Service: Responds to user requests received through the web (REST) server.

Framework Management Point: Coordinates and invokes control and configuration commands.

Defense4All Application: Described subsequently.

Common Classes and Utilities: A library of convenient classes and utilities from which any framework or
SDN application module can benefit.

Repository Services: One of the key elements in the framework philosophy is decoupling the compute
state from the compute logic. All durable states are stored in a set of repositories that can be then replicated,
cached, and distributed, with no awareness of the compute logic (framework or application).

Logging and Flight Recorder Services: The logging service logs error, warning, trace, or
informational messages. These logs are mainly for Defense4All developers. The Flight Recorder records
events and metrics during run time from Java applications.

Health Tracker: Holds aggregated run-time indicators of the operational health of Defense4All and acts
in response to severe functional or performance deteriorations.

Cluster Manager: Responsible for managing coordination with other Defense4All entities operating in a
cluster mode.

The Defense4All Application module consists of the following elements.

DF App Root: The root module of the application.

DF Rest Service: Responds to Defense4All application REST requests.

DF Management Point: The point to drive control and configuration commands. DFMgmtPoint in turn
invokes methods against other relevant modules in the right order.

ODL Reps: A pluggable module set for different versions of the ODC. It comprises two functions in two
submodules: statistics collection for relevant traffic and diversion of relevant traffic.

SDN Stats Collector: Responsible for setting “counters” for every PN at specified network locations
(physical or logical). A counter is a set of OpenFlow flow entries in ODC-enabled network switches and
routers. The module periodically collects statistics from those counters and feeds them to the
SDNBasedDetectionMgr. The module uses the SDNStatsCollectionRep to both set the counters and read
latest statistics from those counters. A stat report consists of read time, counter specification, PN label, and a
list of trafficData information, where each trafficData element contains the latest bytes and packet values for
flow entries configured for <protocol,port,direction> in the counter location. The protocol can be
{tcp,udp,icmp,other ip}, the port is any Layer 4 port, and the direction can be {inbound, outbound}.

SDN Based Detection Manager: A container for pluggable SDN-based detectors. It feeds stat reports
received from the SDNStatsCollector to plugged-in SDN based detectors. It also feeds all SDN based
detectors notifications from the AttackDecisionPoint about ended attacks (so as to allow reset of detection
mechanisms). Each detector learns for each PN its normal traffic behavior over time, and notifies
AttackDecisionPoint when it detects traffic anomalies.

Attack Decision Point: Responsible for maintaining attack lifecycle, from declaring a new attack, to
terminating diversion when an attack is considered over.

Mitigation Manager: A container for pluggable mitigation drivers. It maintains the lifecycle of each
mitigation being executed by an AMS. Each mitigation driver is responsible for driving attack mitigations
using AMSs in their sphere of management.

AMS Based Detector: This module is responsible for monitoring/querying attack mitigation by AMSs.

AMS Rep: Controls the interface to AMSs.

Figure 6.8 suggests the complexity of even a relatively straightforward SDN application.

Finally, it is worth noting that Radware has developed a commercial version of Defense4All, named
DefenseFlow. DefenseFlow implements more sophisticated algorithms for attack detection based on fuzzy
logic. The main benefit is that DefenseFlow has a greater ability to distinguish attack traffic from abnormal
but legitimate high volume of traffic.

3.6 Data Center Networking

So far we’ve discussed three areas of SDN applications: traffic engineering, measurement and monitoring,
and security. The provided examples of these applications suggest the broad range of use cases for them, in
many different kinds of networks. The remaining three applications areas (data center networking, mobility
and wireless, and information-centric networking) have use cases in specific types of networks.

Cloud computing, big data, large enterprise networks, and even in many cases, smaller enterprise networks,
depend strongly on highly scalable and efficient data centers. [KREU15] lists the following as key
requirements for data centers: high and flexible cross-section bandwidth and low latency, QoS based on the
application requirements, high levels of resilience, intelligent resource utilization to reduce energy
consumption and improve overall efficiency, and agility in provisioning network resources (for example, by
means of network virtualization and orchestration with computing and storage).

With traditional network architectures, many of these requirements are difficult to satisfy because of the
complexity and inflexibility of the network. SDN offers the promise of substantial improvement in the ability
to rapidly modify data center network configurations, to flexibly respond to user needs, and to ensure
efficient operation of the network.

The remainder of this subsection examines two example data center SDN applications.

3.6.1 Big Data over SDN

A paper by Wang, et al., in the Proceedings of HotSDN’12 [WANG12], reports on an approach to use SDN
to optimize data center networking for big data applications. The approach leverages the capabilities of SDN
to provide application-aware networking. It also exploits characteristics of structured big data applications as
well as recent trends in dynamically reconfigurable optical circuits. With respect to structured big data
applications, many of these applications process data according to well-defined computation patterns, and
also have a centralized management structure that makes it possible to leverage application-level information
to optimize the network. That is, knowing the anticipated computation patterns of the big data application, it
is possible to intelligently deploy the data across the big data servers and, more significantly, react to
changing application patterns by using SDN to reconfigure flows in the network.

Compared to electronic switches, optical switches have the advantages of greater data rates with reduced
cabling complexity and energy consumption. A number of projects have demonstrated how to collect
network-level traffic data and intelligently allocate optical circuits between endpoints (for example, top-of-
rack switches) to improve application performance. However, circuit utilization and application performance
can be inadequate unless there is a true application-level view of traffic demands and dependencies.
Combining an understanding of the big data computation patterns with the dynamic capabilities of SDN,
efficient data center networking configurations can be used to support the increasing big data demands.

Figure 6.9 shows a simple hybrid electrical and optical data center network, in which OpenFlow-enabled top-
of-rack (ToR) switches are connected to two aggregation switches: an Ethernet switch and an optical circuit
switch (OCS). All the switches are controlled by an SDN controller that manages physical connectivity among
ToR switches over optical circuits by configuring the optical switch. It can also manage the forwarding at
ToR switches using OpenFlow rules.

FIGURE 6.9 Integrated Network Control for Big Data Applications [WANG12]

The SDN controller is also connected to the Hadoop scheduler, which forms queues of jobs to be scheduled, and to the HBase primary controller of a nonrelational database holding data for the big data applications. In
addition, the SDN controller connects to a Mesos cluster manager. Mesos is an open source software package
that provides scheduling and resource allocation services across distributed applications.

The SDN controller makes available network topology and traffic information to the Mesos cluster manager.
In turn, the SDN controller accepts traffic demand requests from the Mesos managers.

With the organization of Figure 6.9, it is possible to set up a scheme whereby the traffic demands of big data
applications are used to dynamically manage the network, using the SDN controller to manage this task.
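
As a concrete illustration of this interaction, the following Python sketch (hypothetical names, data structures, and thresholds; not taken from [WANG12]) shows how an SDN controller might accept aggregated traffic demands from a cluster manager such as Mesos and translate the heaviest demands into optical-circuit assignments, leaving the remaining flows on the packet-switched path.

    # Hypothetical sketch: mapping big data traffic demands onto a hybrid
    # electrical/optical data center fabric. Names and thresholds are illustrative.

    from dataclasses import dataclass

    @dataclass
    class Demand:
        src_tor: str       # source top-of-rack switch
        dst_tor: str       # destination top-of-rack switch
        mbps: int          # expected traffic volume reported by the cluster manager

    OPTICAL_THRESHOLD_MBPS = 5000   # assumed cutoff for provisioning an optical circuit

    def plan_circuits(demands):
        """Return (optical_pairs, packet_pairs) based on reported demand."""
        optical, packet = [], []
        for d in sorted(demands, key=lambda d: d.mbps, reverse=True):
            target = optical if d.mbps >= OPTICAL_THRESHOLD_MBPS else packet
            target.append((d.src_tor, d.dst_tor))
        return optical, packet

    if __name__ == "__main__":
        demands = [Demand("tor1", "tor4", 8000), Demand("tor2", "tor3", 300)]
        optical, packet = plan_circuits(demands)
        # The controller would now configure the OCS for the optical pairs and
        # install OpenFlow rules steering the remaining flows over the Ethernet switch.
        print("optical circuits:", optical)
        print("packet-switched:", packet)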

3.6.2 Cloud Networking over SDN

Cloud Network as a Service (CloudNaaS) is a cloud networking system that exploits OpenFlow SDN
capabilities to provide a greater degree of control over cloud network functions by the cloud customer
[BENS11]. CloudNaaS enables users to deploy applications that include a number of network functions, such
as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various
middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using
high-speed programmable network elements, making CloudNaaS highly efficient.

Figure 6.10 illustrates the principal sequence of events in the CloudNaaS operation, as described in the list
that follows.

FIGURE 6.10 Various Steps in the CloudNaaS Framework

a. A cloud customer uses a simple policy language to specify network services required by the customer
applications. These policy statements are issued to a cloud controller server operated by the cloud service
provider.

b. The cloud controller maps the network policy into a communication matrix that defines desired
communication patterns and network services. The matrix is used to determine the optimal placement of
virtual machines (VMs) on cloud servers such that the cloud can satisfy the largest number of global policies
in an efficient manner. This is done based on the knowledge of other customers’ requirements and their
current levels of activity.

c. The logical communication matrix is translated into network-level directives for data plane forwarding
elements. The customer’s VM instances are deployed by creating and placing the specified number of VMs.

d. The network-level directives are installed into the network devices via OpenFlow.
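
To make steps b and c more concrete, the sketch below shows one way the communication matrix and the resulting network-level directives could be represented; the data structure and group names are illustrative, since [BENS11] does not specify these internals at this level of detail.

    # Illustrative communication matrix derived from tenant policies (step b).
    # Each entry: (source group, destination group) -> required network services.

    communication_matrix = {
        ("web", "db"):       {"qos_mbps": 10, "middleboxes": ["ids1"]},
        ("web", "EXTERNAL"): {"qos_mbps": 100, "middleboxes": ["firewall1"]},
    }

    def directives_for(matrix):
        """Step c: translate matrix entries into abstract data plane directives."""
        for (src, dst), svc in matrix.items():
            chain = " -> ".join(svc["middleboxes"]) or "direct"
            yield f"steer {src}->{dst} via {chain} at {svc['qos_mbps']} Mbps"

    for d in directives_for(communication_matrix):
        print(d)     # step d would install equivalent rules via OpenFlow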

The abstract network model seen by the customer consists of VMs and virtual network segments that connect
VMs together. Policy language constructs identify the set of VMs that comprise an application and define
various functions and capabilities attached to virtual network segments. The main constructs are as follows:

address: Specify a customer-visible custom address for a VM.

group: Create a logical group of one or more VMs. Grouping VMs with similar functions makes it
possible for modifications to apply across the entire group without requiring changes to the service attached to
individual VMs.

middlebox: Name and initialize a new virtual middlebox by specifying its type and a configuration file.
The list of available middleboxes and their configuration syntax is supplied by the cloud provider. Examples
include intrusion detection and audit compliance systems.

networkservice: Specify capabilities to attach to a virtual network segment, such as Layer 2 broadcast
domain, link QoS, and list of middleboxes that must be traversed.

virtualnet: Virtual network segments connect groups of VMs and are associated with network services. A
virtual network can span one or two groups. With a single group, the service applies to traffic between all
pairs of VMs in the group. With a pair of groups, the service is applied between any VM in the first group
and any VM in the second group. Virtual networks can also connect to some predefined groups, such as
EXTERNAL, which indicates all endpoints outside of the cloud.
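
As an illustration only, the following Python sketch renders a tenant specification using these constructs as simple data structures; the helper functions and attribute names are hypothetical, since [BENS11] defines its own policy language syntax.

    # Hypothetical rendering of CloudNaaS-style policy constructs as Python data.
    # The real system uses its own policy language; names and fields here are illustrative.

    def group(name, vms):                   return {"construct": "group", "name": name, "vms": vms}
    def middlebox(name, kind, conf):        return {"construct": "middlebox", "name": name, "type": kind, "config": conf}
    def networkservice(name, **caps):       return {"construct": "networkservice", "name": name, "capabilities": caps}
    def virtualnet(name, service, groups):  return {"construct": "virtualnet", "name": name, "service": service, "groups": groups}

    policy = [
        group("web", vms=["vm1", "vm2"]),
        group("db", vms=["vm3"]),
        middlebox("ids1", kind="intrusion_detection", conf="ids.conf"),
        networkservice("protected", qos="10Mbps", middleboxes=["ids1"], l2broadcast=False),
        # Traffic between any web VM and any db VM traverses the IDS with the given QoS.
        virtualnet("web_db_net", service="protected", groups=("web", "db")),
        # A single-group virtual network applies the service among all pairs within "web".
        virtualnet("web_internal", service="protected", groups=("web",)),
    ]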

Figure 6.11 provides an overview of the architecture of CloudNaaS. Its two main components are a cloud
controller and a network controller. The cloud controller provides a base Infrastructure as a Service (IaaS)
service for managing VM instances. The user can communicate standard IaaS requests, such as setting up
VMs and storage. In addition, the network policy constructs enable the user to define the virtual network
capabilities for the VMs. The cloud controller manages a software programmable virtual switch on each
physical server in the cloud that supports network services for tenant applications, including the management
of the user-defined virtual network segments. The cloud controller constructs the communication matrix and
transmits this to the network controller.

FIGURE 6.11 CloudNaaS Architecture

The network controller uses the communication matrix to configure data plane physical and virtual switches.
It generates virtual networks between VMs and provides VM placement directives to the cloud controller. It
monitors the traffic and performance on the cloud data plane switches and makes changes to the network
state as needed to optimize use of resources to meet tenant requirements. The controller invokes the
placement optimizer to determine the best location to place VMs within the cloud (and reports it to the cloud
controller for provisioning). The controller then uses the network provisioner module to generate the set of
configuration commands for each of the programmable devices in the network and configures them
accordingly to instantiate the tenant’s virtual network segment.

Thus, CloudNaaS provides the cloud customer with the ability to go beyond simply requesting processing and storage resources, to defining a virtual network of VMs and controlling the service and QoS requirements of the virtual network.

UNIT – 4 NETWORK FUNCTION VIRTUALIZATION

4.1 Network Virtualization

Network virtualization is the process of combining hardware and software network resources and
functionality into a single, software-based administrative entity, often referred to as a virtual
network. This allows for the creation of multiple virtual networks that operate independently of each
other, utilizing the same underlying physical network infrastructure.

Here's a brief overview of its components, uses, advantages, and disadvantages:

4.1.1 Components:

1. Hypervisor: This software layer manages the virtualization of hardware resources and facilitates
the creation and management of virtual networks.

2. Virtual Switches: These are software-based switches that enable communication between virtual
machines (VMs) within the virtual network.

3. Network Overlay: This technology creates virtual networks on top of existing physical networks,
allowing for segmentation and isolation.

4.1.2 Uses:

1. Data Center Networking: Network virtualization enables efficient resource utilization in data
centers by creating virtual networks that can be dynamically configured and managed.

2. Cloud Computing: Virtual networks allow cloud service providers to offer customizable and
isolated network environments to their customers.

3. Software-Defined Networking (SDN): Network virtualization is a key component of SDN
architectures, providing flexibility and programmability to network management.

4.1.3 Advantages:

1. Resource Optimization: Network virtualization enables better utilization of physical network resources by allowing multiple virtual networks to share them.

2. Scalability: Virtual networks can be easily scaled up or down to accommodate changing demands
without significant changes to the underlying physical infrastructure.

3. Isolation: Virtual networks provide isolation between different network environments, enhancing
security and privacy.

4. Flexibility: Network virtualization allows for the creation of customized network environments
tailored to specific needs without the constraints of physical hardware.

4.1.4 Disadvantages:

1. Performance Overhead: Virtualization introduces overhead, which can impact network performance, especially in high-throughput environments.

2. Complexity: Managing virtual networks and coordinating communication between virtual and
physical networks can be complex and require specialized skills.

3. Vendor Lock-In: Adopting network virtualization technologies from a specific vendor may result
in vendor lock-in, limiting interoperability with other systems.

4. Security Concerns: Virtualization introduces new attack vectors and security challenges that need
to be addressed to ensure the integrity and confidentiality of network traffic.

4.2 Virtual LANs

Figure 9.1 shows a relatively common type of hierarchical LAN configuration. In this example, the devices
on the LAN are organized into four segments, each served by a LAN switch. The LAN switch is a store-and-
forward packet-forwarding device used to interconnect a number of end systems to form a LAN segment.
The switch can forward a media access control (MAC) frame from a source-attached device to a
destination-attached device. It can also broadcast a frame from a source-attached device to all other attached
devices. Multiple switches can be interconnected so that multiple LAN segments form a larger LAN. A
LAN switch can also connect to a transmission link or a router or other network device to provide
connectivity to the Internet or other WANs.

FIGURE 9.1 A LAN Configuration

Traditionally, a LAN switch operated exclusively at the MAC level. Contemporary LAN switches generally
provide greater functionality, including multilayer awareness (Layers 3, 4, application), quality of service
(QoS) support, and trunking for wide-area networking.

The three lower groups in Figure 9.1 might correspond to different departments, which are physically
separated, and the upper group could correspond to a centralized server farm that is used by all the
departments.

Consider the transmission of a single MAC frame from workstation X. Suppose the destination MAC
address in the frame is workstation Y. This frame is transmitted from X to the local switch, which then
directs the frame along the link to Y. If X transmits a frame addressed to Z or W, its local switch forwards the
MAC frame through the appropriate switches to the intended destination. All these are examples of unicast
addressing, in which the destination address in the MAC frame designates a unique destination. A MAC
frame may also contain a broadcast address, in which case the destination MAC address indicates that all
devices on the LAN should receive a copy of the frame. Thus, if X transmits a frame with a broadcast
destination address, all the devices on all the switches in Figure 9.1 receive a copy of the frame. The total
collection of devices that receive broadcast frames from each other is referred to as a broadcast domain.

In many situations, a broadcast frame is used for a purpose that has only local significance, such as network management or the transmission of some type of alert. Thus, in Figure 9.1, if a broadcast frame carries information that is useful only to a particular department, transmission capacity is wasted on the other portions of the LAN and on the other switches.

One simple approach to improving efficiency is to physically partition the LAN into separate broadcast
domains, as shown in Figure 9.2. We now have four separate LANs connected by a router. In this case, a
broadcast frame from X is transmitted only to the other devices directly connected to the same switch as X.
An IP packet from X intended for Z is handled as follows. The IP layer at X determines that the next hop to
the destination is via router V. This information is handed down to X’s MAC layer, which prepares a MAC
frame with a destination MAC address of router V. When V receives the frame, it strips off the MAC header,
determines the destination, and encapsulates the IP packet in a MAC frame with a destination MAC address
of Z. This frame is then sent to the appropriate Ethernet switch for delivery.
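
The following minimal Python sketch, with invented addresses and tables, traces this behavior: a broadcast stays within the sender's broadcast domain, while a unicast IP packet destined for another domain is first carried in a MAC frame addressed to the router, which then re-encapsulates it toward the final destination.

    # Minimal sketch of forwarding in a partitioned LAN such as Figure 9.2.
    # Addresses and tables are illustrative, not taken from the figure.

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    domains = {                     # broadcast domain -> attached stations and their MACs
        "lan1": {"X": "aa:01", "W": "aa:02", "V": "rr:01"},   # V is the router port on lan1
        "lan2": {"Z": "bb:01", "V": "rr:02"},
    }
    arp = {"Z": "bb:01", "V(lan1)": "rr:01"}   # simplified ARP-like mapping

    def send_frame(src_domain, dst_mac, payload):
        """Deliver a MAC frame within one broadcast domain only."""
        stations = domains[src_domain]
        if dst_mac == BROADCAST:
            return [s for s in stations]            # every attached device gets a copy
        return [s for s, mac in stations.items() if mac == dst_mac]

    def x_sends_ip_to_z():
        # X's IP layer decides the next hop is router V, so the MAC frame targets V.
        first_hop = send_frame("lan1", arp["V(lan1)"], payload="IP(X->Z)")
        # Router V strips the MAC header and re-encapsulates toward Z's MAC on lan2.
        second_hop = send_frame("lan2", arp["Z"], payload="IP(X->Z)")
        return first_hop, second_hop

    print(x_sends_ip_to_z())        # (['V'], ['Z'])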

FIGURE 9.2 A Partitioned LAN

The drawback to this approach is that the traffic pattern may not correspond to the physical distribution of
devices. For example, some departmental workstations may generate a lot of traffic with one of the central
servers. Further, as the networks expand, more routers are needed to separate users into broadcast domains
and provide connectivity among broadcast domains. Routers introduce more latency than switches because
the router must process more of the packet to determine destinations and route the data to the appropriate end
node.

4.2.1 The Use of Virtual LANs

A more effective alternative is the creation of VLANs. In essence, a virtual local-area network (VLAN) is a
logical subgroup within a LAN that is created by software rather than by physically moving and separating
devices. It combines user stations and network devices into a single broadcast domain regardless of the
physical LAN segment they are attached to and allows traffic to flow more efficiently within populations of
mutual interest. The VLAN logic is implemented in LAN switches and functions at the MAC layer. Because
the objective is to isolate traffic within the VLAN, a router is required to link from one VLAN to another.
Routers can be implemented as separate devices, so that traffic from one VLAN to another is directed to a
router, or the router logic can be implemented as part of the LAN switch, as shown in Figure 9.3.

FIGURE 9.3 A VLAN Configuration

VLANs enable any organization to be physically dispersed throughout the company while maintaining its
group identity. For example, accounting personnel can be located on the shop floor, in the research and
development center, in the cash disbursement office, and in the corporate offices, while all members reside
on the same virtual network, sharing traffic only with each other.

Figure 9.3 shows five defined VLANs. A transmission from workstation X to server Z is within the same
VLAN, so it is efficiently switched at the MAC level. A broadcast MAC frame from X is transmitted to all
devices in all portions of the same VLAN. But a transmission from X to printer Y goes from one VLAN to
another. Accordingly, router logic at the IP level is required to move the IP packet from X to Y. Figure 9.3
shows that logic integrated into the switch, so that the switch determines whether the incoming MAC frame
is destined for another device on the same VLAN. If not, the switch routes the enclosed IP packet at the IP
level.

4.2.2 Defining VLANs

A VLAN is a broadcast domain consisting of a group of end stations, perhaps on multiple physical LAN
segments, that are not constrained by their physical location and can communicate as if they were on a
common LAN. Some means is therefore needed for defining VLAN membership. A number of different
approaches have been used for defining membership, including the following:

Membership by port group: Each switch in the LAN configuration contains two types of ports: a trunk
port, which connects two switches; and an end port, which connects the switch to an end system. A VLAN
can be defined by assigning each end port to a specific VLAN. This approach has the advantage that it is
relatively easy to configure. The principal disadvantage is that the network manager must reconfigure VLAN
membership when an end system moves from one port to another.

Membership by MAC address: Because MAC layer addresses are hardwired into the workstation’s
network interface card (NIC), VLANs based on MAC addresses enable network managers to move a
workstation to a different physical location on the network and have that workstation automatically retain its
VLAN membership. The main problem with this method is that VLAN membership must be assigned
initially. In networks with thousands of users, this is no easy task. Also, in environments where notebook PCs
are used, the MAC address is associated with the docking station and not with the notebook PC.
Consequently, when a notebook PC is moved to a different docking station, its VLAN membership must be
reconfigured.

Membership based on protocol information: VLAN membership can be assigned based on IP address,
transport protocol information, or even higher-layer protocol information. This is a quite flexible approach,
but it does require switches to examine portions of the MAC frame above the MAC layer, which may have a
performance impact.
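
A switch or controller that supports all three membership schemes could resolve a station's VLAN roughly as in the sketch below; the tables, field names, and precedence order (protocol over MAC over port) are assumptions made for illustration, not requirements of any standard.

    # Illustrative VLAN membership resolution combining the three approaches above.
    # Precedence (protocol info > MAC > port group) is an assumption for this sketch.

    port_vlan  = {("switch1", 3): 10, ("switch1", 4): 20}        # membership by port group
    mac_vlan   = {"00:1b:44:11:3a:b7": 30}                       # membership by MAC address
    proto_vlan = {"10.0.50.0/24": 50}                            # membership by IP subnet

    def vlan_for(switch, port, mac, ip_prefix=None):
        if ip_prefix and ip_prefix in proto_vlan:
            return proto_vlan[ip_prefix]          # protocol-based assignment
        if mac in mac_vlan:
            return mac_vlan[mac]                  # follows the station when it moves
        return port_vlan.get((switch, port))      # falls back to the port's VLAN

    print(vlan_for("switch1", 3, "00:1b:44:11:3a:b7"))           # 30 (MAC wins over port)
    print(vlan_for("switch1", 4, "aa:bb:cc:dd:ee:ff"))           # 20 (port group)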

4.2.3 Nested VLANs

The original 802.1Q specification allowed for a single VLAN tag field to be inserted into an Ethernet MAC
frame. More recent versions of the standard allow for the insertion of two VLAN tag fields, allowing the
definition of multiple sub-VLANs within a single VLAN. This additional flexibility might be useful in some
complex configurations.

For example, a single VLAN level suffices for an Ethernet configuration entirely on a single premises.
However, it is not uncommon for an enterprise to make use of a network service provider to interconnect
multiple LAN locations, and to use metropolitan area Ethernet links to connect to the provider. Multiple
customers of the service provider may wish to use the 802.1Q tagging facility across the service provider
network (SPN).

One possible approach is for the customer’s VLANs to be visible to the service provider. In that case, the
service provider could support a total of only 4094 VLANs for all its customers. Instead, the service provider
inserts a second VLAN tag into Ethernet frames. For example, consider two customers with multiple sites,
both of which use the same SPN (see part a of Figure 9.6). Customer A has configured VLANs 1 to 100 at
their sites, and similarly Customer B has configured VLANs 1 to 50 at their sites. The tagged data frames
belonging to the customers must be kept separate while they traverse the service provider’s network. The
customer’s data frame can be identified and kept separate by associating another VLAN for that customer’s
traffic. This results in the tagged customer data frame being tagged again with a VLAN tag, when it traverses
the SPN (see part b of Figure 9.6). The additional tag is removed at the edge of the SPN when the data enters
the customer’s network again. This stacked VLAN tagging is known as VLAN stacking or Q-in-Q.
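
The sketch below models the tag operations at the provider edge: an outer service tag (S-tag) is pushed when a customer frame enters the SPN and popped when it leaves, so that two customers' identically numbered VLANs never collide. The tag values are invented for illustration.

    # Illustrative Q-in-Q (VLAN stacking) at the service provider edge.
    # A frame is modeled as a list of VLAN tags, outermost tag first.

    S_TAG = {"customer_A": 1001, "customer_B": 1002}   # provider-assigned outer tags (assumed)

    def ingress_spn(customer, frame_tags):
        """Push the provider S-tag in front of the customer's C-tag."""
        return [S_TAG[customer]] + frame_tags

    def egress_spn(frame_tags):
        """Pop the outer S-tag when the frame re-enters the customer network."""
        return frame_tags[1:]

    # Both customers use VLAN 30 internally, yet their frames stay distinct in the SPN.
    a_in_spn = ingress_spn("customer_A", [30])   # [1001, 30]
    b_in_spn = ingress_spn("customer_B", [30])   # [1002, 30]
    assert egress_spn(a_in_spn) == [30] and egress_spn(b_in_spn) == [30]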

FIGURE 9.6 Use of Stacked VLAN Tags

4.3 OpenFlow VLAN Support

OpenFlow VLAN, also known as OpenFlow-enabled VLAN, refers to the integration of OpenFlow
technology with Virtual Local Area Networks (VLANs) in network environments. This integration allows for
the dynamic control and management of VLANs using the OpenFlow protocol.

OpenFlow VLAN support refers to the capability of the OpenFlow protocol to manage Virtual Local Area
Networks (VLANs) within a network infrastructure. To understand this concept better, let's break it down:

1. OpenFlow: OpenFlow is a protocol that enables the control of network switches and routers by external
software, often referred to as a controller. It allows network administrators to manage and direct the flow of
network traffic dynamically, making networks more programmable, flexible, and efficient.

2. VLANs (Virtual Local Area Networks): VLANs are a way to logically segment a single physical
network into multiple virtual networks. Each VLAN operates as a distinct broadcast domain, meaning
devices within the same VLAN can communicate directly with each other as if they were on the same
physical network, while devices in different VLANs cannot communicate without routing between them.
VLANs are commonly used to improve network performance, security, and manageability.

Now, when we talk about OpenFlow VLAN support, it means that OpenFlow-enabled switches can be
programmed to handle VLAN-related tasks such as:

• VLAN Tagging: OpenFlow switches can add, remove, or modify VLAN tags on Ethernet frames as
they pass through the network. VLAN tags are used to identify which VLAN a frame belongs to and
are crucial for VLAN-based traffic segregation.

• VLAN Membership: OpenFlow controllers can dynamically assign network devices to different
VLANs based on various criteria such as port, MAC address, or protocol. This allows for flexible
VLAN membership management without manual configuration on individual switches.

• VLAN Routing: OpenFlow controllers can direct traffic between VLANs by programming the
switches to route traffic between VLANs according to predefined policies. This routing can be based
on factors such as VLAN tags, IP addresses, or other packet attributes.

• Quality of Service (QoS): OpenFlow controllers can enforce QoS policies within VLANs by
prioritizing or throttling traffic based on VLAN membership or other packet attributes. This helps
ensure that critical traffic receives preferential treatment over less important traffic.

• Security Policies: OpenFlow controllers can enforce security policies at the VLAN level, such as
access control lists (ACLs) or firewall rules, to restrict or permit traffic between VLANs based on
security requirements.

Overall, OpenFlow VLAN support enhances the flexibility, scalability, and manageability of VLAN
deployments by centralizing network control and allowing for dynamic configuration and management of
VLANs through software-defined networking (SDN) principles. This can lead to more efficient network
operations, faster provisioning of network services, and improved network agility.
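
To make these tasks concrete, the sketch below represents a few VLAN-handling flow rules as plain Python dictionaries, mirroring the kind of match/action pairs an OpenFlow controller might install (assign membership by port, strip a tag toward an access port, hand inter-VLAN traffic to routing logic, and apply a QoS queue). The rule format is a simplification for illustration, not actual OpenFlow wire syntax or a specific controller API.

    # Simplified, controller-agnostic representation of OpenFlow-style VLAN rules.
    # Field names follow OpenFlow conventions loosely; this is not wire-format syntax.

    flow_rules = [
        # VLAN membership by port: untagged frames arriving on port 1 join VLAN 10.
        {"match": {"in_port": 1, "vlan_vid": None},
         "actions": ["push_vlan", {"set_field": {"vlan_vid": 10}}, {"output": "normal"}]},

        # VLAN tag removal toward an access port.
        {"match": {"vlan_vid": 10, "out_port_hint": 7},
         "actions": ["pop_vlan", {"output": 7}]},

        # Inter-VLAN routing: traffic from VLAN 10 to VLAN 20 is handed to routing logic.
        {"match": {"vlan_vid": 10, "ipv4_dst": "10.0.20.0/24"},
         "actions": [{"set_field": {"vlan_vid": 20}}, {"output": "router_port"}]},

        # QoS within a VLAN: assign VLAN 30 traffic to a high-priority queue.
        {"match": {"vlan_vid": 30},
         "actions": [{"set_queue": 1}, {"output": "normal"}]},
    ]

    for rule in flow_rules:
        print(rule["match"], "->", rule["actions"])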

4.4 NFV Concepts

Chapter 2, “Requirements and Technology,” defined network functions virtualization (NFV) as the
virtualization of network functions by implementing these functions in software and running them on VMs.
NFV is a significant departure from traditional approaches to the design, deployment, and management of
networking services. NFV decouples network functions, such as Network Address Translation (NAT),
firewalling, intrusion detection, Domain Name Service (DNS), and caching, from proprietary hardware
appliances so that they can run in software on VMs. NFV builds on standard VM technologies, extending
their use into the networking domain.

Virtual machine technology, as discussed in Section 7.2, enables migration of dedicated application and
database servers to commercial off-the-shelf (COTS) x86 servers. The same technology can be applied to
network-based devices, including the following:

Network function devices: Such as switches, routers, network access points, customer premises
equipment (CPE), and deep packet inspectors (for deep packet inspection).

Network-related compute devices: Such as firewalls, intrusion detection systems, and network
management systems.

Network-attached storage: File and database servers attached to the network.

In traditional networks, all devices are deployed on proprietary/closed platforms. All network elements are
enclosed boxes, and hardware cannot be shared. Each device requires additional hardware for increased
capacity, but this hardware is idle when the system is running below capacity. With NFV, however, network
elements are independent applications that are flexibly deployed on a unified platform comprising standard
servers, storage devices, and switches. In this way, software and hardware are decoupled, and capacity for
each application is increased or decreased by adding or reducing virtual resources (see Figure 7.5).

FIGURE 7.5 Vision for Network Functions Virtualization

By broad consensus, the Network Functions Virtualization Industry Standards Group (ISG NFV), created as
part of the European Telecommunications Standards Institute (ETSI), has the lead and indeed almost the sole
role in creating NFV standards. ISG NFV was established in 2012 by seven major telecommunications
network operators. Its membership has since grown to include network equipment vendors, network
technology companies, other IT companies, and service providers such as cloud service providers.

ISG NFV published the first batch of specifications in October 2013, and subsequently updated most of those
in late 2014 and early 2015. Table 7.1 shows the complete list of specifications as of early 2015. Table 7.2
provides definitions for a number of terms that are used in the ISG NFV documents and the NFV literature in
general.

TABLE 7.1 ISG NFV Specifications

TABLE 7.2 NFV Terminology

4.4.1 Simple Example of the Use of NFV

This section considers a simple example from the NFV Architectural Framework document. Part a of Figure
7.6 shows a physical realization of a network service. At a top level, the network service consists of
endpoints connected by a forwarding graph of network functional blocks, called network functions (NFs).
Examples of NFs are firewalls, load balancers, and wireless network access points. In the Architectural
Framework, NFs are viewed as distinct physical nodes. The endpoints are beyond the scope of the NFV
specifications and include all customer-owned devices. So, in the figure, endpoint A could be a smartphone
and endpoint B a content delivery network (CDN) server.

FIGURE 7.6 A Simple NFV Configuration Example

Part a of Figure 7.6 highlights the network functions that are relevant to the service provider and customer.
The interconnections among the NFs and endpoints are depicted by dashed lines, representing logical links.
These logical links are supported by physical paths through infrastructure networks (wired or wireless).

A second observation is that two of the VMs in VNF-FG-2 are hosted on the same physical machine.
Because these two VMs perform different functions, they need to be distinct at the virtual resource level but
can be supported by the same physical machine. But this is not required, and a network management function
may at some point decide to migrate one of the VMs to another physical machine, for reasons of
performance. This movement is transparent at the virtual resource level.

4.4.2 NFV Principles

As suggested by Figure 7.6, the VNFs are the building blocks used to create end-to-end network services.
Three key NFV principles are involved in creating practical network services:

Service chaining: VNFs are modular and each VNF provides limited functionality on its own. For a given
traffic flow within a given application, the service provider steers the flow through multiple VNFs to achieve
the desired network functionality. This is referred to as service chaining.

Management and orchestration (MANO): This involves deploying and managing the lifecycle of VNF
instances. Examples include VNF instance creation, VNF service chaining, monitoring, relocation,
shutdown, and billing. MANO also manages the NFV infrastructure elements.

Distributed architecture: A VNF may be made up of one or more VNF components (VNFC), each of
which implements a subset of the VNF’s functionality. Each VNFC may be deployed in one or multiple
instances. These instances may be deployed on separate, distributed hosts to provide scalability and
redundancy.
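
The sketch below illustrates the service-chaining idea in miniature: a flow description is steered through an ordered list of VNFs, each of which could be composed of one or more VNFC instances deployed on distributed hosts. The classes and the particular chain are invented for illustration.

    # Illustrative model of service chaining over distributed VNFs.
    # Class and function names are invented for this sketch.

    class VNF:
        def __init__(self, name, vnfc_instances=1):
            self.name = name
            self.vnfc_instances = vnfc_instances   # distributed components of this VNF

        def process(self, packet):
            # A real VNF would transform or filter traffic; here we just record the path.
            packet["path"].append(self.name)
            return packet

    def run_service_chain(packet, chain):
        """Steer a flow through each VNF in order (service chaining)."""
        for vnf in chain:
            packet = vnf.process(packet)
        return packet

    chain = [VNF("firewall"), VNF("nat"), VNF("load_balancer", vnfc_instances=3)]
    result = run_service_chain({"src": "10.0.0.5", "dst": "web_tier", "path": []}, chain)
    print(result["path"])        # ['firewall', 'nat', 'load_balancer']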

4.4.3 High-Level NFV Framework

Figure 7.7 shows a high-level view of the NFV framework defined by ISG NFV. This framework supports
the implementation of network functions as software-only VNFs. We use this to provide an overview of the
NFV architecture, which is examined in more detail in Chapter 8, “NFV Functionality.”

FIGURE 7.7 High-Level NFV Framework

The NFV framework consists of three domains of operation:

Virtualized network functions: The collection of VNFs, implemented in software, that run over the
NFVI.

NFV infrastructure (NFVI): The NFVI performs a virtualization function on the three main categories of
devices in the network service environment: compute devices, storage devices, and network devices.

NFV management and orchestration: Encompasses the orchestration and lifecycle management of
physical/software resources that support the infrastructure virtualization, and the lifecycle management of
VNFs. NFV management and orchestration focuses on all virtualization-specific management tasks
necessary in the NFV framework.

The ISG NFV Architectural Framework document specifies that in the deployment, operation, management
and orchestration of VNFs, two types of relations between VNFs are supported:

VNF forwarding graph (VNF FG): Covers the case where network connectivity between VNFs is
specified, such as a chain of VNFs on the path to a web server tier (for example, firewall, network address
translator, load balancer).

VNF set: Covers the case where the connectivity between VNFs is not specified, such as a web server
pool.

4.5 NFV Benefits and Requirements

Having considered an overview of NFV concepts, we can now summarize the key benefits of NFV and
requirements for successful implementation.

4.5.1 NFV Benefits

If NFV is implemented efficiently and effectively, it can provide a number of benefits compared to
traditional networking approaches. The following are the most important potential benefits:

Reduced CapEx, by using commodity servers and switches, consolidating equipment, exploiting
economies of scale, and supporting pay-as-you-grow models to eliminate wasteful overprovisioning. This is
perhaps the main driver for NFV.

Reduced OpEx, in terms of power consumption and space usage, by using commodity servers and
switches, consolidating equipment, and exploiting economies of scale, and reduced network management
and control expenses. Reduced CapEx and OpEx are perhaps the main drivers for NFV.

The ability to innovate and roll out services quickly, reducing the time to deploy new networking services
to support changing business requirements, seize new market opportunities, and improve return on
investment of new services. Also lowers the risks associated with rolling out new services, allowing
providers to easily trial and evolve services to determine what best meets the needs of customers.

Ease of interoperability because of standardized and open interfaces.

Use of a single platform for different applications, users and tenants. This allows network operators to
share resources across services and across different customer bases.

Improved agility and flexibility, by quickly scaling services up or down to address changing demands.

Targeted service introduction based on geography or customer sets is possible. Services can be rapidly
scaled up/down as required.

Encourages a wide variety of ecosystems and openness. It opens the virtual appliance market to pure software entrants, small players, and academia, encouraging more innovation to bring new services and new revenue streams quickly at much lower risk.

4.5.2 NFV Requirements

To deliver these benefits, NFV must be designed and implemented to meet a number of requirements and
technical challenges, including the following [ISGN12]:

Portability/interoperability: The capability to load and execute VNFs provided by different vendors on a
variety of standardized hardware platforms. The challenge is to define a unified interface that clearly
decouples the software instances from the underlying hardware, as represented by VMs and their
hypervisors.

Performance trade-off: Because the NFV approach is based on industry standard hardware (that is,
avoiding any proprietary hardware such as acceleration engines), a probable decrease in performance has to
be taken into account. The challenge is how to keep the performance degradation as small as possible by
using appropriate hypervisors and modern software technologies, so that the effects on latency, throughput,
and processing overhead are minimized.

Migration and coexistence with respect to legacy equipment: The NFV architecture must support a
migration path from today’s proprietary physical network appliance-based solutions to more open standards-
based virtual network appliance solutions. In other words, NFV must work in a hybrid network composed of
classical physical network appliances and virtual network appliances. Virtual appliances must therefore use
existing northbound interfaces (for management and control) and interwork with physical appliances
implementing the same functions.

Management and orchestration: A consistent management and orchestration architecture is required. NFV presents an opportunity, through the flexibility afforded by software network appliances operating in an open and standardized infrastructure, to rapidly align management and orchestration northbound interfaces to well-defined standards and abstract specifications.

Automation: NFV will scale only if all the functions can be automated. Automation of process is
paramount to success.

Security and resilience: The security, resilience, and availability of operators’ networks should not be impaired when VNFs are introduced.

Network stability: Ensuring that the stability of the network is not impacted when managing and orchestrating a large number of virtual appliances across different hardware vendors and hypervisors. This is particularly important when, for example, virtual functions are relocated, during reconfiguration events (for example, because of hardware or software failures), or during a cyber attack.

Simplicity: Ensuring that virtualized network platforms will be simpler to operate than those that exist
today. A significant focus for network operators is simplification of the plethora of complex network
platforms and support systems that have evolved over decades of network technology evolution, while
maintaining continuity to support important revenue generating services.

Integration: Network operators need to be able to “mix and match” servers from different vendors,
hypervisors from different vendors, and virtual appliances from different vendors without incurring
significant integration costs and avoiding lock-in. The ecosystem must offer integration services and
maintenance and third-party support; it must be possible to resolve integration issues between several parties.
The ecosystem will require mechanisms to validate new NFV products.

4.6 NFV Reference Architecture

Figure 7.7 provided a high-level view of the NFV framework. Figure 7.8 shows a more detailed look at the
ISG NFV reference architectural framework. You can view this architecture as consisting of four major
blocks:

FIGURE 7.8 NFV Reference Architectural Framework

NFV infrastructure (NFVI): Comprises the hardware and software resources that create the environment
in which VNFs are deployed. NFVI virtualizes physical computing, storage, and networking and places them
into resource pools.

VNF/EMS: The collection of VNFs implemented in software to run on virtual computing, storage, and
networking resources, together with a collection of element management systems (EMS) that manage the
VNFs.

NFV management and orchestration (NFV-MANO): Framework for the management and orchestration
of all resources in the NFV environment. This includes computing, networking, storage, and VM resources.

OSS/BSS: Operational and business support systems implemented by the VNF service provider.

It is also useful to view the architecture as consisting of three layers. The NFVI together with the virtualized
infrastructure manager provide and manage the virtual resource environment and its underlying physical
resources. The VNF layer provides the software implementation of network functions, together with element
management systems and one or more VNF managers. Finally, there is a management, orchestration, and
control layer consisting of OSS/BSS and the NFV orchestrator.

4.6.1 NFV Management and Orchestration

The NFV management and orchestration facility includes the following functional blocks:

NFV orchestrator: Responsible for installing and configuring new network services (NS) and virtual
network function (VNF) packages, NS lifecycle management, global resource management, and validation
and authorization of NFVI resource requests.

VNF manager: Oversees lifecycle management of VNF instances.

Virtualized infrastructure manager: Controls and manages the interaction of a VNF with computing,
storage, and network resources under its authority, in addition to their virtualization.

4.6.2 Reference Points

Figure 7.8 also defines a number of reference points that constitute interfaces between functional blocks. The
main (named) reference points and execution reference points are shown by solid lines and are in the scope
of NFV. These are potential targets for standardization. The dashed line reference points are available in
present deployments but might need extensions for handling network function virtualization. The dotted
reference points are not a focus of NFV at present.

The main reference points include the following considerations:

Vi-Ha: Marks interfaces to the physical hardware. A well-defined interface specification facilitates operators in sharing physical resources, reassigning them for different purposes, evolving software and hardware independently, and obtaining software and hardware components from different vendors.

Vn-Nf: These interfaces are APIs used by VNFs to execute on the virtual infrastructure. Application
developers, whether migrating existing network functions or developing new VNFs, require a consistent
interface that provides functionality and the ability to specify performance, reliability, and scalability
requirements.

Nf-Vi: Marks interfaces between the NFVI and the virtualized infrastructure manager (VIM). This
interface can facilitate specification of the capabilities that the NFVI provides for the VIM. The VIM must be
able to manage all the NFVI virtual resources, including allocation, monitoring of system utilization, and
fault management.

Or-Vnfm: This reference point is used for sending configuration information to the VNF manager and
collecting state information of the VNFs necessary for network service lifecycle management.

Vi-Vnfm: Used for resource allocation requests by the VNF manager and the exchange of resource
configuration and state information.

Or-Vi: Used for resource allocation requests by the NFV orchestrator and the exchange of resource
configuration and state information.

Os-Ma: Used for interaction between the orchestrator and the OSS/BSS systems.

Ve-Vnfm: Used for requests for VNF lifecycle management and exchange of configuration and state
information.

Se-Ma: Interface between the orchestrator and a data set that provides information regarding the VNF
deployment template, VNF forwarding graph, service-related information, and NFV infrastructure
information models.

4.6.3 Implementation

As with SDN, success for NFV requires standards at appropriate interface reference points and open source
software for commonly used functions. For several years, ISG NFV has been working on standards for the various
interfaces and components of NFV. In September of 2014, the Linux Foundation announced the Open
Platform for NFV (OPNFV) project. OPNFV aims to be a carrier-grade, integrated platform that introduces
new products and services to the industry more quickly. The key objectives of OPNFV are as follows:

Develop an integrated and tested open source platform that can be used to investigate and demonstrate
core NFV functionality.

Secure proactive participation of leading end users to validate that OPNFV releases address participating
operators’ needs.

Influence and contribute to the relevant open source projects that will be adopted in the OPNFV reference
platform.

Establish an open ecosystem for NFV solutions based on open standards and open source software.

Promote OPNFV as the preferred open reference platform to avoid unnecessary and costly duplication of
effort.

OPNFV and ISG NFV are independent initiatives but it is likely that they will work closely together to
assure that OPNFV implementations remain within the standardized environment defined by ISG NFV.

The initial scope of OPNFV will be on building the NFVI and VIM, including application programming interfaces (APIs) to other NFV elements, which together form the basic infrastructure required for VNFs and MANO components. This scope is highlighted in Figure 7.9 as consisting of the NFVI and VIM. With this
platform as a common base, vendors can add value by developing VNF software packages and associated
VNF manager and orchestrator software.

FIGURE 7.9 NFV Implementation

UNIT -V NFV FUNCTIONALITY

5.1 Network Functions Virtualization

The term “Network Functions Virtualization” (NFV) refers to the use of virtual machines in place of physical network appliances. A hypervisor is required so that networking software and functions such as load balancing and routing can run on virtual machines. Network functions virtualization was first proposed at the OpenFlow World Congress in 2012 by a group of service providers that includes AT&T, China Mobile, BT Group, Deutsche Telekom, and many more, working under the European Telecommunications Standards Institute (ETSI).

5.1.1 Need of NFV

With the help of NFV, it becomes possible to separate communication services from specialized hardware like routers and firewalls. This eliminates the need to buy new hardware, and network operators can
offer new services on demand. With this, it is possible to deploy network components in a matter of hours as
opposed to months as with conventional networking. Furthermore, the virtualized services can run on less
expensive generic servers.

5.1.2 Advantages:

• Lower expenses, as NFV follows a pay-as-you-go model, which means companies pay only for what they require.
• Less equipment, as NFV runs on virtual machines rather than physical machines, which leads to fewer appliances and lowers operating expenses as well.
• Scaling the network architecture is quick and simple using virtual functions in NFV, so it does not call for the purchase of more hardware.

5.1.3 Working:

Software running on virtual machines carries out the same networking tasks as conventional hardware. The software handles the tasks of load balancing, routing, and firewall security. Network engineers can automate the provisioning of the virtual network and program all of its various components using a hypervisor or a software-defined networking controller.

5.1.4 Benefits of NFV:

• Many service providers believe that the advantages of NFV outweigh its issues.

• Traditional hardware-based networks are time-consuming, as they require network administrators to buy specialized hardware units, manually configure them, and then join them to form a network. This requires skilled or well-equipped workers.

• It costs less as it works under the management of a hypervisor, which is significantly less expensive
than buying specialized hardware that serves the same purpose.

• A virtualized network is easier to configure and administer. As a result, network capabilities may be updated or added instantly.

5.1.5 Risks of NFV:

Security hazards do exist, though, and network functions virtualization security issues have shown to be a
barrier to widespread adoption among telecom companies. The following are some dangers associated with
implementing network function virtualization that service providers should take into account:
• Physical security measures do not work: Compared to locked-down physical equipment in a data center, virtualized network components are more susceptible to new types of attacks.
• Malware is difficult to isolate and contain: Malware travels more easily among virtual components
running on the same virtual computer than between hardware components that can be isolated or physically
separated.
• Network activity is less visible: Because traditional traffic monitoring tools struggle to detect
potentially malicious anomalies in network traffic going east-west between virtual machines, NFV
necessitates more fine-grained security solutions.

5.2 Virtualized Network Functions

A VNF is a virtualized implementation of a traditional network function. Table 8.3 contains examples of
functions that could be virtualized.

TABLE 8.3 Potential Network Functions to Be Virtualized

5.2.1 VNF Interfaces

As discussed earlier, a VNF consists of one or more VNF components (VNFCs). The VNFCs of a single
VNF are connected internal to the VNF. This internal structure is not visible to other VNFs or to the VNF
user.

Figure 8.6 shows the interfaces relevant to a discussion of VNFs as described in the list that follows.

FIGURE 8.6 VNF Functional View

SWA-1: This interface enables communication between a VNF and other VNFs, PNFs, and endpoints.
Note that the interface is to the VNF as a whole and not to individual VNFCs. SWA-1 interfaces are logical
interfaces that primarily make use of the network connectivity services available at the SWA-5 interface.

SWA-2: This interface enables communications between VNFCs within a VNF. This interface is vendor
specific and therefore not a subject for standardization. This interface may also make use of the network
connectivity services available at the SWA-5 interface. However, if two VNFCs within a VNF are deployed
on the same host, other technologies may be used to minimize latency and enhance throughput, as described
below.

SWA-3: This is the interface to the VNF manager within the NFV management and orchestration module.
The VNF manager is responsible for lifecycle management (creation, scaling, termination, and so on). The
interface typically is implemented as a network connection using IP.

SWA-4: This is the interface for runtime management of the VNF by the element manager.

SWA-5: This interface describes the execution environment for a deployable instance of a VNF. Each VNFC maps to a virtualization container interface (for example, a VM).

5.2.2 VNFC to VNFC Communication

As mentioned earlier, the internal structure of a VNF, in terms of multiple VNFCs, is not exposed externally.
The VNF appears as a single functional system in the network it supports. However, internal connectivity
between VNFCs within the same VNF or across co-located VNFs needs to be specified by the VNF provider,
supported by the NFVI, and managed by the VNF manager. The VNF Architecture document describes a
number of architecture design models that are intended to provide desired performance and quality of service
(QoS), such as access to storage or compute resources. One of the most important of these design models
relates to communication between VNFCs.

Figure 8.7, from the ETSI VNF Architecture document, illustrates six scenarios using different network
technologies to support communication between VNFCs:

FIGURE 8.7 VNFC to VNFC Communication

1. Communication through a hardware switch. In this case, the VMs supporting the VNFCs bypass the
hypervisor to directly access the physical NIC. This provides enhanced performance for VNFCs on different
physical hosts.

2. Communication through the vswitch in the hypervisor. This is the basic method of communication
between co-located VNFCs but does not provide the QoS or performance that may be required for some
VNFs.

3. Greater performance can be achieved by using appropriate data processing acceleration libraries and
drivers compatible with the CPU being used. The library is called from the vswitch. An example of a suitable
commercial product is the Data Plane Development Kit (DPDK), which is a set of data plane libraries and
network interface controller drivers for fast packet processing on Intel architecture platforms. Scenario 3
assumes a Type 1 hypervisor (see Figure 7.3).

4. Communication through an embedded switch (eswitch) deployed in the NIC with Single Root I/O
Virtualization (SR-IOV). SR-IOV is a PCI-SIG specification that defines a method to split a device into
multiple PCI express requester IDs (virtual functions) in a fashion that allows an I/O memory management
unit (MMU) to distinguish different traffic streams and apply memory and interrupt translations so that these
traffic streams can be delivered directly to the appropriate VM, and in a way that prevents nonprivileged
traffic flows from impacting other VMs.

5. Embedded switch deployed in the NIC hardware with SR-IOV, and with data plane acceleration software
deployed in the VNFC.

6. A serial bus connects directly two VNFCs that have extreme workloads or very low-latency requirements.
This is essentially an I/O channel means of communication rather than a NIC means.
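
The choice among these six options is essentially a placement and performance trade-off; the sketch below captures that decision logic in a highly simplified form, with selection rules that are assumptions for illustration rather than guidance from the ETSI document.

    # Simplified selection of a VNFC-to-VNFC communication technology.
    # The decision rules are illustrative assumptions, not from the ETSI document.

    def pick_path(co_located, needs_high_throughput, ultra_low_latency, nic_has_sriov):
        if ultra_low_latency:
            return "serial bus / direct I/O channel"             # scenario 6
        if not co_located:
            return "hardware switch (bypassing the hypervisor)"  # scenario 1
        if needs_high_throughput and nic_has_sriov:
            return "NIC embedded switch with SR-IOV"             # scenarios 4 and 5
        if needs_high_throughput:
            return "vswitch with DPDK-style acceleration"        # scenario 3
        return "hypervisor vswitch"                              # scenario 2

    print(pick_path(co_located=True, needs_high_throughput=False,
                    ultra_low_latency=False, nic_has_sriov=False))   # hypervisor vswitch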

5.2.3 VNF Scaling

An important property of VNFs is referred to as elasticity, which simply means the ability to scale up/down
or scale out/in. Every VNF has associated with it an elasticity parameter of no elasticity, scale up/down only,
scale out/in only, or both scale up/down and scale out/in.

A VNF is scaled by scaling one or more of its constituent VNFCs. Scale out/in is implemented by
adding/removing VNFC instances that belong to the VNF being scaled. Scale up/down is implemented by
adding/removing resources from existing VNFC instances that belong to the VNF being scaled.
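
A minimal sketch of the two scaling modes follows; the thresholds, resource units, and the preference for scaling out over scaling up are assumptions made for illustration only.

    # Illustrative scale out/in versus scale up/down decision for a VNF.
    # Thresholds and policy are assumptions, not from the ETSI documents.

    class VNFCInstance:
        def __init__(self, vcpus=2):
            self.vcpus = vcpus

    class ScalableVNF:
        def __init__(self, elasticity):
            self.elasticity = elasticity           # "none", "up_down", "out_in", or "both"
            self.instances = [VNFCInstance()]

        def scale(self, load_per_instance):
            if load_per_instance > 0.8 and self.elasticity in ("out_in", "both"):
                self.instances.append(VNFCInstance())          # scale out: add a VNFC instance
            elif load_per_instance > 0.8 and self.elasticity == "up_down":
                self.instances[0].vcpus += 2                   # scale up: add resources
            elif load_per_instance < 0.2 and len(self.instances) > 1:
                self.instances.pop()                           # scale in: remove an instance

    vnf = ScalableVNF(elasticity="both")
    vnf.scale(0.9)
    print(len(vnf.instances))       # 2 -> the VNF manager added a VNFC instance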

5.3 NFV Management and Orchestration

The NFV management and orchestration (MANO) component of NFV has as its primary function the
management and orchestration of an NFV environment. This task, by itself, is complex. Further complicating
MANO functionality is its need to interoperate with and cooperate with existing operations support systems
(OSS) and business support systems (BSS) in providing management functionality for customers whose
networking environment consists of a mixture of physical and virtual elements.

Figure 8.8, from the ETSI MANO document, shows the basic structure of NFV-MANO and its key
interfaces. As can be seen, there are five management blocks: three within NFV-MANO, EMS associated
with VNFs, and OSS/BSS. These two latter blocks are not part of MANO but do exchange information with
MANO for the purpose of the overall management of a customer’s networking environment.

FIGURE 8.8 The NFV-MANO Architectural Framework with Reference Points

5.3.1 Virtualized Infrastructure Manager

Virtualized infrastructure management (VIM) comprises the functions that are used to control and manage
the interaction of a VNF with computing, storage, and network resources under its authority, as well as their
virtualization. A single instance of a VIM is responsible for controlling and managing the NFVI compute,
storage, and network resources, usually within one operator’s infrastructure domain. This domain could
consist of all resources within an NFVI-PoP, resources across multiple NFVI-PoPs, or a subset of resources
within an NFVI-PoP. To deal with the overall networking environment, multiple VIMs within a single
MANO may be needed.

A VIM performs the following:

Inventory of software (for example, hypervisors), computing, storage and network resources dedicated to
NFV infrastructure.

Allocation of virtualization enablers, for example, VMs onto hypervisors, compute resources, storage, and
relevant network connectivity

Management of infrastructure resources and their allocation, for example, increasing resources to VMs, improving energy efficiency, and resource reclamation

Visibility into and management of the NFV infrastructure

Root cause analysis of performance issues from the NFV infrastructure perspective

Collection of infrastructure fault information

Collection of information for capacity planning, monitoring, and optimization

5.3.2 Virtual Network Function Manager

A VNF manager (VNFM) is responsible for VNFs. Multiple VNFMs may be deployed; a VNFM may be
deployed for each VNF, or a VNFM may serve multiple VNFs. Among the functions that a VNFM performs
are the following:

VNF instantiation, including VNF configuration if required by the VNF deployment template (for
example, VNF initial configuration with IP addresses before completion of the VNF instantiation operation)

VNF instantiation feasibility checking, if required

VNF instance software update/upgrade

VNF instance modification

VNF instance scaling out/in and up/down

VNF instance-related collection of NFVI performance measurement results and faults/events information,
and correlation to VNF instance-related events/faults

VNF instance assisted or automated healing

VNF instance termination

VNF lifecycle management change notification

Management of the integrity of the VNF instance through its lifecycle

Overall coordination and adaptation role for configuration and event reporting between the VIM and the
EM

5.3.3 NFV Orchestrator

The NFV orchestrator (NFVO) is responsible for resource orchestration and network service orchestration.

Resource orchestration manages and coordinates the resources under the management of different VIMs.
The NFVO coordinates, authorizes, releases, and engages NFVI resources among different PoPs or within one PoP. It does so by engaging with the VIMs directly through their northbound APIs instead of engaging with the NFVI resources directly.

Network services orchestration manages/coordinates the creation of an end-to-end service that involves
VNFs from different VNFM domains. Service orchestration does this in the following way:

It creates an end-to-end service between different VNFs. It achieves this by coordinating with the respective VNFMs so that it does not need to talk to the VNFs directly. An example is creating a service between the base
station VNFs of one vendor and core node VNFs of another vendor.

It can instantiate VNFMs, where applicable.

It does the topology management of the network services instances (also called VNF forwarding graphs).

5.3.4 Repositories

Associated with NFVO are four repositories of information needed for the management and orchestration
functions:

Network services catalog: List of the usable network services. A deployment template for a network
service in terms of VNFs and description of their connectivity through virtual links is stored in NS catalog
for future use.

VNF catalog: Database of all usable VNF descriptors. A VNF descriptor (VNFD) describes a VNF in
terms of its deployment and operational behavior requirements. It is primarily used by VNFM in the process
of VNF instantiation and lifecycle management of a VNF instance. The information provided in the VNFD is
also used by the NFVO to manage and orchestrate network services and virtualized resources on NFVI.

NFV instances: List containing details about network services instances and related VNF instances.

NFVI resources: List of NFVI resources utilized for the purpose of establishing NFV services.
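
As an illustration only (the actual ETSI information model is far richer), entries in these repositories might carry information along the following lines, with field names that are assumptions rather than standardized attributes.

    # Highly simplified illustration of catalog entries used by NFV-MANO.
    # Field names are assumptions; the ETSI descriptors define many more attributes.

    vnf_catalog = {
        "vFirewall-1.2": {                      # a VNF descriptor (VNFD)
            "vendor": "exampleco",
            "vnfcs": ["fw-dataplane", "fw-mgmt"],
            "deployment_flavours": {"small": {"vcpus": 2, "mem_gb": 4},
                                    "large": {"vcpus": 8, "mem_gb": 16}},
            "lifecycle_events": ["instantiate", "scale_out", "terminate"],
        }
    }

    ns_catalog = {
        "web-protection-service": {             # a network service deployment template
            "vnfs": ["vFirewall-1.2", "vLoadBalancer-2.0"],
            "virtual_links": [("vFirewall-1.2", "vLoadBalancer-2.0")],   # forwarding graph
        }
    }

    nfv_instances = []        # populated as services and VNFs are instantiated
    nfvi_resources = []       # NFVI resources consumed by those instances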

5.3.5 Element Management

The element management (EM) is responsible for fault, configuration, accounting, performance, and security (FCAPS) management functionality for a VNF. These management functions are also the responsibility of the VNFM, but the EM can perform them through a proprietary interface with the VNF, in contrast to the VNFM. However, the
EM needs to make sure that it exchanges information with VNFM through open reference point (VeEm-
Vnfm). The EM may be aware of virtualization and collaborate with VNFM to perform those functions that
require exchange of information regarding the NFVI resources associated with VNF. EM functions include
the following:

Configuration for the network functions provided by the VNF

Fault management for the network functions provided by the VNF

Accounting for the usage of VNF functions

Collecting performance measurement results for the functions provided by the VNF

Security management for the VNF functions

5.3.6 OSS/BSS

The OSS/BSS are the combination of the operator’s other operations and business support functions that are
not otherwise explicitly captured in the present architectural framework, but are expected to have
information exchanges with functional blocks in the NFV-MANO architectural framework. OSS/BSS
functions may provide management and orchestration of legacy systems and may have full end-to-end
visibility of services provided by legacy network functions in an operator’s network.

In principle, it would be possible to extend the functionalities of existing OSS/BSS to manage VNFs and
NFVI directly, but that may be a proprietary implementation of a vendor. Because NFV is an open platform,
managing NFV entities through open interfaces (as that in MANO) makes more sense. The existing
OSS/BSS, however, can add value to the NFV MANO by offering additional functions if they are not
supported by a certain implementation of NFV MANO. This is done through an open reference point (Os-
Ma) between NFV MANO and existing OSS/BSS.

5.4 NFV Use Cases

ISG NFV has developed a representative set of service models and high-level use cases that may be
addressed by NFV. These use cases are intended to drive further development of standards and products for
network-wide implementation. The Use Cases document identifies and describes a first set of service models
and high-level use cases that represent, in the view of NFV ISG member companies, important service
models and initial fields of application for NFV, and that span the scope of technical challenges being
addressed by the NFV ISG.

There are currently nine use cases, which can be divided into the categories of architectural use cases and
service-oriented use cases, as described in Table 8.4.

TABLE 8.4 ETSI NFV Use Cases

5.4.1 Architectural Use Cases

The four architectural use cases focus on providing general-purpose services and applications based on the
NFVI architecture.

5.4.1.1 NFVI as a Service

NFVIaaS is a scenario in which a service provider implements and deploys an NFVI that may be used to
support VNFs both by the NFVIaaS provider and by other network service providers. For the NFVIaaS
provider, this service provides for economies of scale. The infrastructure is sized to support the provider’s
own needs for deploying VNFs and extra capacity that can be sold to other providers. The NFVIaaS
customer can offer services using the NFVI of another service provider. The NFVIaaS customer has
flexibility in rapidly deploying VNFs, either for new services or to scale out existing services. Cloud
computing providers may find this service particularly attractive.

Figure 8.9 provides an example [ONF14]. Service provider X offers a virtualized load balancing service.
Some of carrier X’s customers need load balancing services at locations where X does not maintain NFVI,
but where service provider Z does. NFVIaaS offers a means for carrier Z to lease NFV infrastructure
(compute, network, hypervisors, and so on) to service provider X, which gives the latter access to
infrastructure that would otherwise be prohibitively expensive to obtain. Through leasing, such capacity is
available on demand, and can be scaled as needed.

FIGURE 8.9 NFVIaaS Example

5.4.1.2 VNF as a Service

Whereas NFVIaaS is similar to the cloud model of Infrastructure as a Service (IaaS), VNFaaS corresponds to
the cloud model of Software as a Service (SaaS). NFVIaaS provides the virtualization infrastructure to
enable a network service provider to develop and deploy VNFs with reduced cost and time compared to
implementing the NFVI and the VNFs. With VNFaaS, a provider develops VNFs that are then available off
the shelf to customers. This model is well suited to virtualizing customer premises equipment such as routers
and firewalls.

5.4.1.3 Virtual Network Platform as a Service

VNPaaS is similar to an NFVIaaS that includes VNFs as components of the virtual network infrastructure.
The primary differences are the programmability and development tools of the VNPaaS that allow the
subscriber to create and configure custom ETSI NFV-compliant VNFs to augment the catalog of VNFs
offered by the service provider. This allows all the third-party and custom VNFs to be orchestrated via the
VNF FG.

5.4.1.4 VNF Forwarding Graphs

VNF FG allows virtual appliances to be chained together in a flexible manner. This technique is called
service chaining. For example, a flow may pass through a network monitoring VNF, a load-balancing VNF,
and finally a firewall VNF in passing from one endpoint to another. The VNF FG use case is based on an
information model that describes the VNFs and physical entities to the appropriate
management/orchestration systems used by the service provider. The model describes the characteristics of
the entities including the NFV infrastructure requirements of each VNF and all the required connections
among VNFs and between VNFs and the physical network included in the IaaS service. To ensure the
required performance and resiliency of the end-to-end service, the information model must be able to specify
the capacity, performance and resiliency requirements of each VNF in the graph. To meet SLAs, the
management and orchestration system will need to monitor the nodes and linkages included in the service
graph. In theory, a VNF FG can span the facilities of multiple network service providers.
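
To make the service-chaining idea concrete, here is a minimal, assumed representation of a VNF forwarding
graph matching the monitoring, load-balancing, and firewall example above, including the per-VNF capacity
and resiliency attributes the information model must capture. It is a sketch, not an ETSI information model.

# Minimal illustration of a VNF forwarding graph (service chain).
# The node attributes mirror the capacity/resiliency requirements mentioned in
# the text; names and values are assumptions.

forwarding_graph = [
    {"vnf": "traffic-monitor", "capacity_gbps": 10, "resiliency": "active-standby"},
    {"vnf": "load-balancer",   "capacity_gbps": 10, "resiliency": "active-active"},
    {"vnf": "firewall",        "capacity_gbps": 10, "resiliency": "active-standby"},
]

def steer(packet, graph):
    """Pass a packet through each VNF in the chain, in order."""
    for hop in graph:
        packet = f"{packet} -> {hop['vnf']}"
    return packet

print(steer("flow(src=endpoint-A, dst=endpoint-B)", forwarding_graph))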

5.4.2 Service-Oriented Use Cases

These use cases focus on the provision of services to end customers, in which the underlying infrastructure is
transparent.

Virtualization of Mobile Core Network and IP Multimedia Subsystem

Mobile cellular networks have evolved to contain a variety of interconnected network function elements,
typically involving a large variety of proprietary hardware appliances. NFV aims at reducing the network
complexity and related operational issues by leveraging standard IT virtualization technologies to
consolidate different types of network equipment onto industry standard high-volume servers, switches, and
storage, located in NFVI-PoPs.

Virtualization of Mobile Base Station

The focus of this use case is radio access network (RAN) equipment in mobile networks. RAN is the part of
a telecommunications system that implements a wireless technology to access the core network of the mobile
network service provider. At minimum, it involves hardware on the customer premises or in the mobile
device and equipment forming a base station for access to the mobile network. There is the possibility that a
number of RAN functions can be virtualized as VNFs running on industry standard infrastructure.

Virtualization of the Home Environment

This use case deals with network provider equipment located as customer premises equipment (CPE) in a
residential location. These CPE devices mark the operator/service provider presence at the customer
premises and usually include a residential gateway (RGW) for Internet and Voice over IP (VoIP) services (for
example, a modem/router for digital subscriber line [DSL] or cable), and a set-top box (STB) for media
services normally supporting local storage for personal video recording (PVR) services. NFV technologies
become ideal candidates to support this concentration of computation workload from formerly dispersed
functions with minimal cost and improved time to market, while new services can be introduced as required
on a grow-as-you-need basis. Further, the VNFs can reside on servers in the network service provider’s PoP.
This greatly simplifies the electronics environment of the home, reducing end user and operator capital
expenditure (CapEx).

Virtualization of CDNs

Delivery of content, especially of video, is one of the major challenges of all operator networks because of
the massive growing amount of traffic to be delivered to end customers of the network. The growth of video
traffic is driven by the shift from broadcast to unicast delivery via IP, by the variety of devices used for video
consumption and by increasing quality of video delivered via IP networks in resolution and frame rate.

Alongside the growth of today’s video traffic, the requirements on quality are also evolving: Internet players
are increasingly in a position to provide both live and on-demand content services to Internet end users, with
quality constraints similar to those of the traditional TV services of network operators.

Some Internet service providers (ISPs) are deploying proprietary Content Delivery Network (CDN) cache
nodes in their networks to improve delivery of video and other high-bandwidth services to their customers.
Cache nodes typically run on dedicated appliances running on custom or industry standard server platforms.
Both CDN cache nodes and CDN control nodes can potentially be virtualized. The benefits of CDN
virtualization are similar to those gained in other NFV use cases, such as VNFaaS.

Fixed Access Network Functions Virtualization

NFV offers the potential to virtualize remote functions in the hybrid fiber/copper access network and passive
optical network (PON) fiber to the home and hybrid fiber/wireless access networks. This use case has the
potential for cost savings by moving complex processing closer to the network. An additional benefit is that
virtualization supports multiple tenancy, in which more than one organizational entity can either be allocated,
or given direct control of, a dedicated partition of a virtual access node. Finally, virtualizing broadband
access nodes can enable synergies to be exploited by the co-location of wireless access nodes in a common
NFV platform framework (that is, common NFVI-PoPs), thereby improving the deployment economics and
reducing the overall energy consumption of the combined solution.

An indication of the relative importance of the various use cases is found in a survey of 176 network
professionals from a range of industries, reported in 2015 Guide to SDN and NFV [METZ14] and conducted
in late 2014. The survey respondents were asked to indicate the two use cases that they think will gain the
most traction in the market over the next two years. Table 8.5 shows their responses. The data in Table 8.5
indicates that although IT organizations have interest in a number of the ETSI-defined use cases, by a wide
margin they are most interested in the NFVIaaS use case.

TABLE 8.5 Interest in ETSI NFV Use Cases

5.5 SDN and NFV

Over the past few years, the hottest topics in networking have been SDN and NFV. Separate standards bodies
are pursuing the two technologies, and a large, growing number of providers have announced or are working
on products in the two fields. Each technology can be implemented and deployed separately, but there is
clearly a potential for added value by the coordinated use of both technologies. It is likely that over time,
SDN and NFV will tightly interoperate to provide a broad, unified software-based networking approach to
abstract and programmatically control network equipment and network-based resources.

The relationship between SDN and NFV is perhaps best viewed as SDN functioning as an enabler of NFV. A
major challenge with NFV is to best enable the user to configure a network so that VNFs running on servers
are connected to the network at the appropriate place, with the appropriate connectivity to other VNFs, and
with desired QoS. With SDN, users and orchestration software can dynamically configure the network and
the distribution and connectivity of VNFs. Without SDN, NFV requires much more manual intervention,
especially when resources beyond the scope of NFVI are part of the environment.

The Kemp Technologies Blog [MCMU14] gives the example of load balancing where load balancer services
are implemented as VNF entities. If demand for load-balancing capacity increases, a network orchestration
layer can rapidly spin up new load-balancing instances and also adjust the network switching infrastructure
to accommodate the changed traffic patterns. In turn, the load-balancing VNF entity can interact with the
SDN controller to assess network performance and capacity and use this additional information to balance
traffic better, or even to request provisioning of additional VNF resources.
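
The sketch below is a hedged illustration of the interaction just described: an orchestration layer spins up
additional load-balancer VNF instances when demand exceeds a per-instance capacity and then asks an SDN
controller to redistribute flows. The class and method names are hypothetical and do not correspond to any
real controller or orchestrator API.

# Hypothetical sketch of NFV/SDN cooperation for load-balancer scale-out.
# None of these classes correspond to a real controller or orchestrator API.

class SdnController:
    def redistribute_flows(self, lb_instances):
        """Adjust switching so traffic is spread over the current LB instances."""
        print(f"Installing flow rules across {len(lb_instances)} LB instances")

class Orchestrator:
    def __init__(self, controller):
        self.controller = controller
        self.lb_instances = ["lb-1"]

    def handle_load(self, requests_per_sec, per_instance_capacity=10000):
        """Scale out the load-balancer VNF and update switching when demand grows."""
        needed = -(-requests_per_sec // per_instance_capacity)  # ceiling division
        while len(self.lb_instances) < needed:
            self.lb_instances.append(f"lb-{len(self.lb_instances) + 1}")  # new VNF instance
        self.controller.redistribute_flows(self.lb_instances)

Orchestrator(SdnController()).handle_load(requests_per_sec=25000)  # 3 instances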

Some of the ways that ETSI believes that NFV and SDN complement each other include the following:

The SDN controller fits well into the broader concept of a network controller in an NFVI network domain.

SDN can play a significant role in the orchestration of the NFVI resources, both physical and virtual,
enabling functionality such as provisioning, configuration of network connectivity, bandwidth allocation,
automation of operations, monitoring, security, and policy control.

SDN can provide the network virtualization required to support multitenant NFVIs.

Forwarding graphs can be implemented using the SDN controller to provide automated provisioning of
service chains, while ensuring strong and consistent implementation of security and other policies.

The SDN controller can be run as a VNF, possibly as part of a service chain including other VNFs. For
example, applications and services originally developed to run on the SDN controller could also be
implemented as separate VNFs.

Figure 8.10, from the ETSI VNF Architecture document, indicates the potential relationship between SDN
and NFV. The arrows can be described as follows:

FIGURE 8.10 Mapping of SDN Components with NFV Architecture

SDN enabled switch/NEs include physical switches, hypervisor virtual switches, and embedded switches
on the NICs.

Virtual networks created using an infrastructure network SDN controller provide connectivity services
between VNFC instances.

SDN controller can be virtualized, running as a VNF with its EM and VNF manager. Note that there may
be SDN controllers for the physical infrastructure, the virtual infrastructure, and the virtual and physical
network functions. As such, some of these SDN controllers may reside in the NFVI or management and
orchestration (MANO) functional blocks (not shown in figure).

SDN enabled VNF includes any VNF that may be under the control of an SDN controller (for example,
virtual router, virtual firewall).

SDN applications, for example service chaining applications, can be VNFs themselves.

Nf-Vi interface allows management of the SDN enabled infrastructure.

Ve-Vnfm interface is used between the SDN VNF (SDN controller VNF, SDN network functions VNF,
SDN applications VNF) and their respective VNF Manager for lifecycle management.

Vn-Nf allows SDN VNFs to access connectivity services between VNFC interfaces.
