Abel John Project
April, 2018.
DECLARATION
I hereby declare that this project was written by me and is the result of my own
research efforts. It has not been presented in any previous publication for a higher
degree of this or any other University. All citations and sources of information
are clearly acknowledged by means of references.
----------------------------------------
SALAMI OMOTAYO TAIBAT
CERTIFICATION
This is to certify that this project work was carried out by SALAMI OMOTAYO TAIBAT
with matriculation number NOU151062171 in the Department of Computer Science, School
of Science and Technology, National Open University of Nigeria, Lagos Study Centre, Lagos,
in partial fulfillment of the requirement for the award of Master of Science in Information
Technology.
.................................................. ..................................................
Dr. Olalekan OGUNJIMI DATE
SUPERVISOR
.................................................. ..................................................
Prof. Justus A. SOKEFUN DATE
STUDY CENTRE DIRECTOR
.................................................. ..................................................
Prof. Monioluwa OLANIYI DATE
DEAN, SCHOOL OF SCIENCE
ACKNOWLEDGEMENT
First and foremost, my gratitude goes to God Almighty, who has been so merciful and
generous in my life.
I am highly indebted to my parents, Mr. & Mrs. Salami, who have vowed to leave no stone
unturned in their quest to give me formal education. God bless them.
Last but not least, when a tedious job has been done, one has to look back and consider
who contributed to its success. I must therefore acknowledge my debt of gratitude to my
project supervisor, Dr. Ogunjimi Olalekan, without whose diligent guidance and advice
this study would not have seen the light of day.
I also wish to place on record the invaluable help and commitment rendered by my friends
and well-wishers for the moral and financial support given to me throughout the course of
my study.
Finally, my special thanks go to my humble sisters and brothers for their understanding,
love, care, and financial assistance in times of difficulty throughout my stay at the
school.
My prayer is that God will reward all those who contributed immensely to this project,
directly or indirectly.
Abstract
A large amount of today's communication occurs within data centers where a large
number of virtual servers (running one or more virtual machines) provide service
providers with the infrastructure needed for their applications and services. In this thesis,
we will look at the next step in the virtualization revolution, the virtualized network.
Software-defined networking (SDN) is a relatively new concept that is moving the field
towards a more software-based solution to networking. Today when a packet is
forwarded through a network of routers, decisions are made at each router as to
which router is the next hop destination for the packet. With SDN these decisions are
made by a centralized SDN controller that decides upon the best path and instructs the
devices along this path as to what action each should perform. Taking SDN to its extreme
minimizes the physical network components and increases the number of virtualized
components. The reasons behind this trend are several, although the most prominent are
simplified processing and network administration, a greater degree of automation,
increased flexibility, and shorter provisioning times. This in turn leads to a reduction in
operating expenditures and capital expenditures for data center owners, which both drive
the further development of this technology.
Virtualization has been gaining ground in the last decade. However, the initial
introduction of virtualization began in the 1970s with server virtualization offering the
ability to create several virtual server instances on one physical server. Today we already
have taken small steps towards a virtualized network by virtualization of network
equipment such as switches, routers, and firewalls. Common to all of these virtualization
technologies is that, in their early stages, they have encountered trust issues and general
concerns as to whether software-based solutions are as rugged and reliable as hardware-
based solutions. SDN has also encountered these issues, and discussion of these issues
continues among both believers and skeptics. Concerns about trust remain a problem for
the growing number of cloud-based services where multitenant deployments may lead to
loss of personal integrity and other security risks. As a relatively new technology, SDN is
still immature and has a number of vulnerabilities. As with most software-based
solutions, the potential for security risks increases. This thesis investigates how denial-of-
service (DoS) attacks affect an SDN environment and a single-threaded controller,
described by text and via simulations.
Abstract .......................................................................................
Table of contents .....................................................................
List of acronyms and abbreviations........................................
1 Introduction ..........................................................................
1.1 General introduction to SDN ......................................................
1.1.1 The security issues ...........................................................
1.1.2 Multi-tenancy ...................................................................
1.2 Problem definition .............................................................
1.3 Purpose ..............................................................................
1.4 Research Methodology ...............................................................
1.5 Delimitations ................................................................................
1.6 Structure of this thesis ...............................................................
2 Background ..........................................................................
2.1 OpenFlow: The first SDN standard ............................................
2.2 Denial-of-service-attacks ............................................................
2.3 Layer two interconnections ........................................................
2.3.1 Network bridges ................................................................
2.3.2 Multiprotocol Label Switching .................................
2.3.3 Spanning Tree Protocol ....................................................
2.3.4 FabricPath ........................................................................
2.4 Solving multi-tenancy in SDN...................................................
2.4.1 VLAN ..............................................................................
2.4.2 VXLAN ............................................................................
2.4.3 Virtualization ...................................................................
2.5 Cloud computing service models ............................................
2.6 Cloud Environments .................................................................
2.7 Related work ..............................................................................
2.7.1 Major related work ..........................................................
2.7.2 Minor related work ..........................................................
3 Methodology .......................................................................
3.1 Research Process .....................................................................
3.2 Simulation environment............................................................
3.3 Evaluation of the work process................................................
4 Analysis...............................................................................
4.1 Resource consumption issues.................................................
4.2 Multi-tenancy issues .................................................................
4.3 Benchmarking OpenFlow Controller .......................................
4.3.1 Switches being the differentiator .....................................
4.3.2 MAC-addresses (hosts) being the differentiator..............
Our initial task was to find the most significant security risks when implementing SDN within
one of TeliaSonera’s data centers and to suggest how to manage and/or mitigate these risks.
After our literature study and meetings with our advisers at TeliaSonera, the most critical
security risks identified in an SDN environment were Distributed Denial of Service
(DDoS) attacks and overall multi-tenancy issues. However, because SDN is not (yet) widely
implemented it is hard to come to definite conclusions.
The rest of this chapter describes the specific problem that this thesis addresses, the context of
the problem, the goals of this thesis project, and outlines the structure of the thesis.
By decoupling the data and control planes, routing decisions can be centralized and made by
software, rather than made in a decentralized fashion at every router within the network. In SDN, the
network is controlled through an application-programming interface (API). This API enables
innovation and offers new possibilities for configuring, managing, and optimizing the network
for specific flows of traffic. This in turn offers great opportunities for controlling and
adapting the network to meet specific needs during everyday usage.
The separation of the control- and data plane introduces a need for some protocol to support the
communication between these two planes. One such protocol is called “OpenFlow”. This
protocol will be described in more detail in Section 2.1. Dan Pitt, Executive Director of
the Open Networking Foundation describes OpenFlow as:
"OpenFlow, as a standard, lays the foundation for a new network software discipline, working
towards a high-level language that will make networks as readily programmable as a PC" [1]
There are numerous benefits of SDN for both users and managers of the network. For example,
the network could operate more effectively as the network manager can prioritize certain data
packets in real-time via the SDN controller, thus optimizing data flows and exploiting the
flexibility of SDN to use alternative paths for other traffic. Using these alternative paths
reduces the latency of some of the traffic at the cost of increased latency for other traffic.
Additionally, using these alternative paths distributes the load over a larger number of paths,
thus potentially allowing the network operator to delay scaling up their physical network,
reducing their capital expenditures.
SDN provides an API making the network programmable. This ultimately enables applications
to be aware of the network, and enables the network to be aware of the needs of applications.
Both of these enable improved automation and controlling of the network and traffic flows.
This API improves the use of existing resources and allows greater innovation in the future,
bringing the rate of evolution of networks (and network protocols) closer to the rate of software
development.
Today's data centers are built using a large number of different network devices for routing,
load balancing, switching, etc. Because many companies employ a multiple vendor strategy for
their purchases, the network consists of a heterogeneous collection of devices, i.e., with
different devices produced by different manufacturers. The diversity of devices
increases the complexity of configuration and management because there are generally
vendor specific APIs to take into consideration. This requires either a network management
solution that can deal with each of these different APIs (such as Tail-f Systems Network
Control System [2]) or use of SDN where the configuration is done through a centralized API
and standardized for the whole network.
A major advantage of software realizations of functionality is that this software can be rapidly
deployed to a large number of computers, thus enabling the functionality to scale up or down to
follow demand. Unfortunately, a corollary to this is that a single exploit in the software could
affect a large number of users and their personal & data integrity. An example of such an error
in software is the programming mistake resulting in the Heartbleed bug discovered in
OpenSSL. This error allows anyone to access and steal information which should be protected
by encryption technologies (such as SSL/TLS) [3].
One of the most common types of security problems today is denial-of-service attacks (DoS). A
DoS attack prevents requests from legitimate users from being served by the target of the attack. The
main goal of a DoS attack is to make the victim’s service unavailable, hence the victim will be
unable to provide the expected service to its customers. For example, a DoS attack on a web
server would make the web service unavailable to legitimate users’ browsers. This is achieved
by flooding the web server with more requests than it can handle, thus interrupting and/or
suspending the service provided by this web server. DoS attacks have been carried out for
political, economic, and malicious purposes. For some examples of such attacks see [4] and
[5].
Another security risk that is important to consider is the growing trend of employees bringing
their own devices, such as smartphones, tablets, computers, etc. to their workplace. This policy
is often referred to as Bring Your Own Device (BYOD). Allowing all these personal devices to
connect to the network increases the chances of a device internally infecting the network with
malicious software. Such an infection could lead to illegitimate access to sensitive information
since these devices frequently have access to sensitive company information.
1.1.2 Multi-tenancy
Today multi-tenancy is used more and more together with virtualization. Multi-tenancy means
multiple customers share a single hardware instance, often sharing the same network interface
and storage infrastructure. However, multi-tenancy can cause challenges for service providers
as the different customers are usually isolated by having their processing done in different
virtual machines (VMs) running on a hypervisor. In single-tenancy, customers are each
assigned their own server and storage within a data center. Today many customers who demand
high availability and high security for their operations require a single-tenancy solution.
Moreover, there is always a risk of human error, and in a multi-tenant architecture such an
error could affect multiple users. For example, if an attacker is able to circumvent encryption
for a shared database in a multi-tenant service, then the attacker would be able to access data of
all the particular database instances of these users. Thus a number of different customers could
all be negatively affected at once [6].
Defining our problem began with a case study agreed upon with our industrial supervisors at
TeliaSonera. The problem that was identified via this first case study is: how does an SDN
controller react to a DDoS attack, and how can security be provided in an SDN environment? The
focus was to be on how to manage and mitigate distributed denial of service (DDoS) attacks on
TeliaSonera’s data center web-applications.
A second case study concerned how to establish trust amongst customers for multi-tenancy
services in an SDN environment, where everything is virtualized and the underlying hardware
is shared. This second case study also examined the major differences in how multi-tenancy is
implemented today in TeliaSonera’s cloud environment and how it could be implemented in a
future SDN environment.
1.3 Purpose
The purpose of this thesis project was to help TeliaSonera understand how they could exploit
the adoption of SDNs in their data centers. The advantages for them as a company are: (1) to
decrease capital expenditures by making better use of their existing infrastructure; (2) increase
their ability to manage the networking resources within their data centers; (3) reduce their
energy and cooling costs, as a smaller pool of servers can be shared in a multi-tenancy
configuration; and (4) make the advantages of their adopting SDN available to their
customers (in terms of improved security, reducing time to market, lower costs via
scalability, and simplifying configuration and management).
As noted above, TeliaSonera’s customers will also gain from the adoption of SDN within the
datacenter. However, it is important that TeliaSonera properly address the security and multi-
tenancy issues so that their customers’ personal and data integrity & privacy can be ensured. If
they are not successful in addressing these issues, then the gains from adopting SDN will be
reduced and they would risk damaging their reputation if customers’ information were to be
leaked or made accessible to unauthorized parties, thus running a risk of losing current and
potential customers.
Interviews with professionals who deal with different layers of the OSI-model were conducted
to gain a broader view of the issues regarding multi-tenancy in the target environment. Their
input provided the backbone of our analysis. Over the course of this project, an SDN network
with a single-threaded controller was simulated, tested, and benchmarked. This
benchmarking was done to gain hands-on experience with SDN.
The research methodology chosen for this project was a qualitative research methodology
(paradigm) because of its inductive and postmodernist nature. Consideration of the limited
duration and the overall size of the project, as well as the fact that SDN has not yet been
implemented in TeliaSonera's data centers, and our current knowledge about the field of
interest were also taken into account when selecting this research methodology. Based upon
our initial literature study and our meetings with TeliaSonera our understanding of the issues
evolved, while our objectivity may have been colored by the research and our own work in the
field. We rejected the use of a quantitative methodology, as it would have been a deductive and
a non-value-loaded approach. Instead, we chose a qualitative research methodology approach.
This approach was expected to give qualitative insights into the security issues of introducing a
SDN into TeliaSonera's data centers for use with their web services. However, our study should
be followed up in a future project via a quantitative evaluation after the implementation of
SDN for these services has been completed.
1.5 Delimitations
One important limitation of this project was its limited duration. This
meant that this project could only study the potential impact of a future implementation of
SDN in TeliaSonera's data centers - as this implementation has not yet been carried out and
would not be carried out during the period of our thesis project. An additional limitation was
our lack of knowledge concerning SDN, virtualization, and data centers when starting this
project.
Out of scope for this thesis project is whatever happens outside of the data center’s site. In this
project, the focus of our attention is solely on security related issues within a SDN
environment, specifically DoS-attacks and how multi-tenancy is solved within a Software
Defined Data Center (SDDC). Together with TeliaSonera we chose these two security
challenges, because according to them these are the most likely to pose a threat to the
company’s security. Therefore, other potential security related problems were not considered in
this thesis project.
Although VMware does not manufacture network devices, VMware is actively developing
software and services for cloud management and virtualization. VMware’s approach is based upon
their NSX™ software that provides a virtual network and security platform. This software is
distributed in each of the hypervisors running on the computers in the data center. A hypervisor
isolates the VM and applications from the physical server. Applications running in a VM do not
see any difference (other than potentially more limited throughput) between the virtual network
and the underlying physical network. As a result, applications do not require any special
configuration to run on the virtualized network.
The NSX network hypervisor placed between the physical and application layer does not affect the
hardware in any way, facilitating hardware upgrades and exchanges. Whenever a data center
owner starts to run out of computing resources or storage capacity, NSX enables more hardware to
be added to the underlying physical network to provide increased scalability.
The main driver for SDN is the emergence of cloud services. From a Juniper Networks’ point of
view they say, “Software defined networking is designed to merge the network into the age of the
cloud”. As a network equipment vendor Juniper Networks’ approach to SDN is obviously quite
different from that of VMware. Along with VMware and Cisco, Juniper Networks has also
identified the data center as an environment ripe for SDN implementation. Juniper is adapting to
this by shifting towards selling network equipment, but licensing their software separately, rather
than selling the network equipment with software as of today [7].
Cisco’s SDN solution is the Cisco Open Network Environment (ONE) architecture. This
architecture is expected to help networks to become more open and programmable. ONE builds on
a protocol called OpenFlow. Section 2.1 describes the OpenFlow protocol in detail.
OpenFlow operates on layer 2 (the data link layer) in the OSI-model. An OpenFlow-switch
consists of flow table(s) and a group table. This information is used when forwarding frames (see
Figure 2-1). An OpenFlow device utilizes a secure channel to communicate with the SDN
controller. Through this secure channel, packets (i.e. containing an Ethernet frame - including
frame header and payload) and commands are sent using the OpenFlow protocol. The flow table
consists of a set of flow entries. Each flow entry consists of match fields, counters, and
instructions. Frames arriving at the switch are compared with the flow entries in the flow tables
and if there is a match, then the set of instructions in that flow entry will be executed. The frame
might be directed to another of the switch’s flow tables for further processing. This procedure is
called pipeline processing. When the instructions do not (re-)direct the frame anymore the pipeline
processing has finished and the action set associated with the frame is executed. This execution
usually forwards the frame on some interface.
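The pipeline processing just described can be sketched as a minimal model. This is a hedged illustration only: the `FlowEntry` class, the field name `eth_dst`, and the action strings are assumptions made for the example, not the OpenFlow wire format or a real switch implementation.

```python
# Minimal model of OpenFlow-style pipeline processing: match a frame
# against flow tables, accumulate instructions, follow goto-table links.

class FlowEntry:
    def __init__(self, match, instructions, goto_table=None):
        self.match = match                # dict of field -> required value
        self.instructions = instructions  # actions merged into the action set
        self.goto_table = goto_table      # next table id, or None to stop

    def matches(self, frame):
        return all(frame.get(f) == v for f, v in self.match.items())

def pipeline(tables, frame):
    """Run a frame through the flow tables; return the final action set."""
    action_set, table_id = [], 0
    while table_id is not None:
        entry = next((e for e in tables[table_id] if e.matches(frame)), None)
        if entry is None:
            return ["packet-in"]          # table miss: ask the controller
        action_set.extend(entry.instructions)
        table_id = entry.goto_table       # pipeline continues, or finishes
    return action_set                     # executed once processing ends

tables = {
    0: [FlowEntry({"eth_dst": "aa:bb"}, ["set-queue 1"], goto_table=1)],
    1: [FlowEntry({}, ["output port 3"])],   # empty match = wildcard entry
}
print(pipeline(tables, {"eth_dst": "aa:bb"}))  # ['set-queue 1', 'output port 3']
print(pipeline(tables, {"eth_dst": "cc:dd"}))  # ['packet-in']
```

The wildcard entry in table 1 also shows the "don't-care" idea discussed later: an entry that leaves fields unset matches a wider range of frames.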
There are 15 fields (shown in Figure 2-2) used when matching packets against flow
entries. The flow entry can be more or less specific in how it controls the network,
depending on how the fields are set. If all fields are set, then the flow entry would be
very specific and cover only a narrow range of possible frames. Creating flow rules
covering a wider range of incoming frames is also possible by setting the “don’t-care”
bits; this is beneficial when there is a need to limit the number of flow rules being created.
Figure 2-2: The fields used when matching packets with flow entries, according to the
OpenFlow Switch Specification version 1.1.0 Implemented [16].
If the frame does not match any flow entry, i.e. a table miss occurs, then the frame is sent to the
SDN controller over a secure channel within a packet-in message. Depending on the
configuration of the switch, another alternative would be to drop the packet. However, by default
the frame is usually sent to the SDN controller via a packet-in message to ask the SDN controller
to make a decision about how to handle this specific frame. A packet-in contains either a part of
the frame header (by default the first 128 bytes) or the entire frame, depending on whether the
switch supports internal buffering or not. If the frame can be buffered, i.e. temporarily stored
within the switch, then a buffer ID will be included with the packet-in message. The SDN
controller should respond with a packet-out and/or a flow-modification message. The packet-out
message sends the frame out a specified port on the original switch. This packet-out message must
contain a list of actions; otherwise the frame will be dropped. If the entire frame was not sent from
the switch to the controller, then the buffer ID will be referenced by the controller in the packet-out
message. Using this buffer ID, the switch forwards the original frame that triggered the
corresponding packet-in out the specified port. Alternatively, the controller might send a flow-
modification message. The main purpose of this message is to add, delete, or modify the flow
tables in the switch. In the case of an incoming frame that leads to a table miss, the controller can
send a flow-modification message to instruct the switch to add a new flow entry to its flow tables
so that this switch will know what to do with similar frames in the future.
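The packet-in/packet-out/flow-modification exchange can be sketched as a small model. This is a hedged illustration, not real OpenFlow code: the `Controller` class, the message tuples, and the learning-switch policy for picking an output port are assumptions made for the example.

```python
# Simplified model of a controller reacting to packet-in messages.
# Message tuples stand in for real OpenFlow packet-out/flow-mod structs.

class Controller:
    def __init__(self):
        self.mac_to_port = {}           # learned MAC address -> switch port

    def handle_packet_in(self, src, dst, in_port, buffer_id=None):
        self.mac_to_port[src] = in_port         # learn the sender's port
        out_port = self.mac_to_port.get(dst, "FLOOD")
        messages = []
        if out_port != "FLOOD":
            # Install a flow entry so similar frames are handled in-switch,
            # without future round trips to the controller.
            messages.append(("flow-mod", {"match": {"eth_dst": dst},
                                          "actions": [("output", out_port)]}))
        # A packet-out must carry an action list, otherwise the frame is
        # dropped; buffer_id refers to the frame buffered in the switch.
        messages.append(("packet-out", {"buffer_id": buffer_id,
                                        "actions": [("output", out_port)]}))
        return messages

ctrl = Controller()
print(ctrl.handle_packet_in("aa", "bb", in_port=1))  # dst unknown: flood only
print(ctrl.handle_packet_in("bb", "aa", in_port=2))  # dst learned: flow-mod + packet-out
```

The second packet-in triggers a flow-mod because the destination's port was learned from the first frame, mirroring how a table miss leads to a new flow entry.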
Each flow entry contains two timeout values. The idle timeout specifies how long an entry
remains in the table in the absence of matching traffic: if there is no activity within this
time, the flow entry is removed. The hard timeout is an upper limit, which indicates when the
flow entry must be removed regardless of activity.
Priorities of flow entries are also important: if multiple flow entries match an incoming
packet, then the flow entry with the highest priority is used [16].
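The timeout and priority rules can be made concrete with a minimal sketch. All names and the dictionary representation of a flow entry are illustrative assumptions; a real switch implements this in hardware or optimized data structures.

```python
# Sketch of idle/hard timeout bookkeeping and priority-based selection.
# Entry fields (installed, last_hit, etc.) are hypothetical stand-ins.

def expired(entry, now):
    if now - entry["installed"] >= entry["hard_timeout"]:
        return True                     # hard timeout: removed regardless of activity
    if now - entry["last_hit"] >= entry["idle_timeout"]:
        return True                     # idle timeout: no recent matching traffic
    return False

def select_entry(entries, frame, now):
    live = [e for e in entries if not expired(e, now)]
    matching = [e for e in live if all(frame.get(k) == v
                                       for k, v in e["match"].items())]
    # When several entries match, the highest-priority one wins.
    return max(matching, key=lambda e: e["priority"], default=None)

entries = [
    {"match": {}, "priority": 0, "installed": 0, "last_hit": 90,
     "idle_timeout": 60, "hard_timeout": 300},              # wildcard entry
    {"match": {"vlan": 10}, "priority": 100, "installed": 0, "last_hit": 90,
     "idle_timeout": 60, "hard_timeout": 300},              # specific entry
]
hit = select_entry(entries, {"vlan": 10}, now=100)
print(hit["priority"])   # 100: both entries match, the specific one wins
```

At time 400 both entries would have exceeded their hard timeout, so the same lookup returns no entry and the frame would cause a table miss.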
2.2 Denial-of-service-attacks
Computer networking has come a long way, but even with today’s advanced network architecture,
there are vulnerabilities. DoS attacks are one of the most common security-related problems of
servers today. A DoS attack can be accomplished by several methods, but most of these attacks
can be categorized into one of three different methods: vulnerability attacks, connection flooding,
and bandwidth flooding.
Vulnerability attacks take advantage of bugs or exploits in the service at the server. In this way,
the service stops functioning and in the worst case, the server hosting the service could crash.
Connection flooding, often in the form of a TCP SYN flood attack, occurs when a large number of TCP
connection attempts arrive at the targeted server. The attacker causes these TCP SYN packets to be
sent, either by one source or by many sources. When a TCP connection is being created, the
client and server exchange messages to establish a TCP connection before they send any data. The
first packet sent by the client has the SYN (synchronization) flag set and an initial sequence
number. The server allocates a TCP control block and sends a SYN-ACK (synchronization-
acknowledgement) back to the client along with the server’s SYN flag sent to indicate that it is
sending its own initial sequence number. The client would normally send an ACK
(acknowledgement) back to server thus establishing the TCP connection. If the last step of the
procedure does not occur, there is a half-open TCP connection. At some point the server will not
be able to establish any more connections until the half-open TCP connections are closed (thus
releasing the storage associated with their TCP control blocks), therefore all new legitimate
connection establishment attempts will be denied.
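The exhaustion mechanism described above can be sketched as a toy model: a fixed-size backlog of TCP control blocks fills with SYNs that are never acknowledged. This is purely illustrative; real TCP stacks use retransmission timers, backlog recycling, and defenses such as SYN cookies.

```python
# Toy model of SYN flooding: half-open connections consume a fixed
# backlog of control blocks until new connections are refused.

class Server:
    def __init__(self, backlog=4):
        self.backlog = backlog          # max simultaneous half-open connections
        self.half_open = set()          # connections awaiting the final ACK

    def on_syn(self, client):
        if len(self.half_open) >= self.backlog:
            return "dropped"            # no room for another TCP control block
        self.half_open.add(client)      # allocate a control block, send SYN-ACK
        return "syn-ack"

    def on_ack(self, client):
        self.half_open.discard(client)  # handshake completed, control block freed
        return "established"

srv = Server(backlog=4)
for i in range(4):                      # attacker sends SYNs but never ACKs
    srv.on_syn(f"zombie-{i}")
print(srv.on_syn("legitimate-user"))    # dropped
```

Because the zombies never send the final ACK, the backlog never drains, and the legitimate user's connection attempt is denied.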
Bandwidth flooding occurs when a large number of packets are sent (nearly) simultaneously by
the attacker (or by hosts controlled by the attacker) to the targeted host. The target’s incoming link
will be choked (i.e., all of the available bandwidth will be used up) and legitimate usage of the
server becomes constrained. In some cases, one attacking machine cannot cause enough
damage. For example, such a bandwidth flooding DoS attack would fail when the targeted server
has an access bandwidth much greater than the amount of traffic coming from the attacker. In this
case, a DDoS attack would be used by the attacker. In a DDoS attack the attacker creates a
network, often referred to as a botnet, by infecting multiple computers with viruses or Trojans.
These infected computers are often called zombie computers. The attacker can now have a much
larger impact on the targeted server because it can coordinate multiple zombies to generate traffic
at a much higher aggregate rate. Figure 2-3 shows an attacker and a botnet of zombie computers
performing a DDoS attack on a data center. Moreover, there is a problem detecting DDoS attacks,
as it is not obvious that these multiple sources are in fact intent upon attacking the victim. This is
unlike a regular DoS attack where all of the traffic is coming from a single source. When such a
DoS attack occurs, one could simply block packets from this source. Unfortunately, DDoS attacks
are very common today, although mounting such an attack is considered a crime in many
countries. Note that it is very hard to defend against a DDoS attack, as one cannot easily know
which sources to block. To date there have been 2-3 major DDoS attacks aimed at TeliaSonera’s
data centers.
Figure 2-4: Bridging two LANs (or VLANs) via a shared bridge entity on layer 2
2.3.2 Multiprotocol Label Switching
Multiprotocol Label Switching (MPLS) is a multilayer protocol, often referred to as the “layer
2.5 protocol” in the OSI-model, i.e. it sits between the traditional layer 3 routing and layer 2
switching layers. MPLS enables a variety of quality of service (QoS) features, and traffic flows
can be allocated to specific label-switched paths to best utilize the current network capacity.
2.3.4 FabricPath
Cisco’s FabricPath provides communication between two endpoint VMs. The process starts with a
FabricPath network switch encapsulating an incoming Ethernet frame inside a 16-byte FabricPath
header (as shown in Figure 2-5). When the Ethernet frame is about to exit the FabricPath network
this FabricPath header is removed and the Ethernet frame is forwarded.
The FTag is used to determine the path the frame should take through the FabricPath network.
The FTag field is 10 bits wide, hence it has a range from 0 to 1023, enabling the use of
up to 1023 different paths through the network. A given FTag value is unique within a
FabricPath network, and this value does not change during transmission through the network
(nor do the source and destination field values change). This means that the network devices
along the path do not alter the path chosen by the initial edge FabricPath device that
encapsulated the Ethernet frame. The TTL field prevents loops from occurring; this field
is decremented by 1 at every FabricPath device as the frame is forwarded within the FabricPath
network. The subswitch ID is based upon the FTag field value, and this determines the path that
a frame travels through the FabricPath network. Notably, the subswitch ID and FTag are the
only fields that are altered in the outer MAC. FabricPath uses the IS-IS link-state routing
protocol [6, 8, 9]. For further details see: http://www.packetmischief.ca/2012/04/17/five-
functional-facts-about-fabricpath/.
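The FTag/TTL behavior above can be illustrated with a small encapsulation sketch. The bit layout used here (a 16-bit tag holding a 10-bit FTag and a 6-bit TTL) is an assumption made for the example and is not the exact Cisco on-wire header format.

```python
import struct

# Illustrative FabricPath-style tagging: the FTag stays fixed along the
# path while the TTL is decremented at each hop to prevent loops.

def encapsulate(frame: bytes, ftag: int, ttl: int) -> bytes:
    assert 0 <= ftag <= 1023 and 0 <= ttl <= 63
    tag = (ftag << 6) | ttl             # pack 10-bit FTag and 6-bit TTL
    return struct.pack("!H", tag) + frame

def forward(packet: bytes) -> bytes:
    (tag,) = struct.unpack("!H", packet[:2])
    ftag, ttl = tag >> 6, tag & 0x3F
    assert ttl > 0, "TTL exhausted: frame dropped to prevent a loop"
    new_tag = (ftag << 6) | (ttl - 1)   # FTag unchanged, TTL decremented
    return struct.pack("!H", new_tag) + packet[2:]

pkt = encapsulate(b"\xaa" * 8, ftag=7, ttl=3)
pkt = forward(forward(pkt))             # two FabricPath hops
(tag,) = struct.unpack("!H", pkt[:2])
print(tag >> 6, tag & 0x3F)             # 7 1: same path, TTL down by 2
```

The invariant shown (FTag constant, TTL decreasing) is the point: intermediate devices follow the path chosen by the edge device that performed the encapsulation.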
TeliaSonera uses VLANs to isolate services and applications internally as well as externally to
customers within their data centers. For example, in TeliaSonera Service (TSS) e-mail, there are
various VLANs used within specific areas, such as network area, front end (web), and back end
(application). Additionally, there are specific VLANs for backup and server management.
The IEEE 802.1Q standard defines VLANs on an Ethernet network. This standard describes how
Ethernet frames are tagged with a VLAN tag, as shown in Figure 2-7. This standard also defines
how switches and other network devices process and handle VLAN-tagged frames along a path.
The 4-byte VLAN tag begins with a 16-bit Tag Protocol Identifier (TPID). It is followed by a
3-bit Priority Code Point (PCP) and a 1-bit Drop Eligible Indicator (DEI) - this bit is also
known as the CFI bit shown in the figure. These two, together with the 12-bit VLAN ID (VID),
make up the 16-bit Tag Control Information field (TCI). The TPID and TCI are a total of
32 bits long.
The TPID field is by default set to 0x8100 when an Ethernet frame is VLAN tagged. PCP
indicates the priority of the frame, with priority levels ranging from 0 to 7 (as there are
3 bits). Traffic is categorized into voice, video, audio, data, etc. and an appropriate PCP
value is set. DEI is a one-bit field set to 0 or 1, where 1 indicates that this frame is drop
eligible, i.e., that this frame could be dropped in the event of traffic congestion. For
example, a VoIP frame containing audio or video data might be marked as drop-eligible,
indicating that it could be dropped if there is congestion.
IEEE 802.1ad extends standard VLAN tagging via stacked VLANs. The idea is to stack multiple VLAN headers on top of each other, thereby expanding the number of available unique VLANs. No extra functionality is added, other than the obvious increase in the number of available VLANs.
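The tag layout described above can be made concrete with a short sketch. This is illustrative code written for this example (not taken from any networking library); the field widths and the default TPID of 0x8100 follow the 802.1Q description above.

```python
import struct

TPID_8021Q = 0x8100  # default TPID for a VLAN-tagged Ethernet frame

def pack_vlan_tag(pcp, dei, vid):
    """Pack PCP (3 bits), DEI (1 bit), and VID (12 bits) into a 4-byte 802.1Q tag."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vid <= 0xFFF
    tci = (pcp << 13) | (dei << 12) | vid   # 16-bit Tag Control Information
    return struct.pack("!HH", TPID_8021Q, tci)

def unpack_vlan_tag(tag):
    """Return (pcp, dei, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID_8021Q
    return tci >> 13, (tci >> 12) & 1, tci & 0xFFF

tag = pack_vlan_tag(pcp=5, dei=0, vid=100)   # e.g., voice traffic on VLAN 100
```

The 12-bit VID is what caps a network at 4096 distinct VLANs, which is the limitation that 802.1ad stacking works around.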
2.4.2 VXLAN
VXLAN is a layer 2 overlay on a layer 3 network. This allows VMs on different IP networks to operate as if they were connected to the same layer 2 network. In the VXLAN header, the VXLAN Network Identifier (VNI) is 24 bits, allowing for a maximum of roughly 16 million (i.e., 2^24) unique tags within a network domain. Each individual layer 2 overlay is called a VXLAN segment. Only VMs within the same VXLAN segment can communicate with one another. The communicating VMs must use the same VNI and the same VLAN ID. Whenever a VM generates and sends a frame, VXLAN encapsulation takes place at the physical host machine’s hypervisor or possibly at a switch. As a result, the VM does not need any specific VXLAN configuration and is unaware of the underlying VXLAN segment; hence the VM simply inserts a destination MAC address as usual (using either IPv4 ARP or IPv6 neighbor discovery mechanisms). The encapsulation and de-encapsulation process occurs at a so-called VXLAN Tunnel End Point (VTEP). A VTEP is an endpoint that can be realized in hardware or software. It is called a tunnel endpoint because the encapsulation is in effect from the sender VTEP to the receiver VTEP, thus realizing a “VXLAN tunnel” between the two endpoints.
Figure 2-8 illustrates the encapsulation and the added components inserted by the sender VTEP. Since a VXLAN is an overlay running on top of a layer 3 network, it requires IP source and destination addresses as well as MAC source and destination addresses for the physical interface of the destination. Note that the inner destination MAC address specifies which VM on the physical host the packet is intended for. The VTEP does VNI lookups to see whether the communication is within the same segment or not. The VTEP also performs encapsulation or de-encapsulation as necessary; hence the VM is unaware of the underlying transport mechanism. The (receiving) VTEPs also store the corresponding IP and MAC address mapping information in a table, so that a response packet does not need to be flooded throughout the network. [10]
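The 8-byte VXLAN header defined in RFC 7348 [10] (a flags byte with the I bit set, reserved bytes, and the 24-bit VNI) can be sketched with Python's struct module. This is an illustrative sketch of the header layout only, not a full encapsulation implementation; the VTEP lookup and the outer MAC/IP/UDP headers discussed above are omitted.

```python
import struct

VXLAN_FLAG_I = 0x08  # "I" flag: a valid VNI is present (RFC 7348)

def pack_vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
    assert 0 <= vni < 2 ** 24            # 24-bit VNI -> ~16 million segments
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

def unpack_vni(header):
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = pack_vxlan_header(vni=5000)
```

The 24-bit VNI field is the key difference from the 12-bit VLAN ID: it lifts the ~4096-segment ceiling to about 16 million.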
There are two different server virtualization models (shown in Figure 2-9): bare-metal (also called “native”) and hosted. TeliaSonera’s servers run an OS, most commonly Red Hat Enterprise Linux (RHEL) or Microsoft’s Windows Server. On top of this OS runs a hypervisor, through which VMs are created. The hypervisor also manages and operates the VMs running on top of it, ensuring they get the computing and memory resources they need to operate. These resources can be specified through different SLAs with customers. Amazon, with its Elastic Compute Cloud, is an example of another company that provides VMs to external customers with specific SLAs via its own data centers. The SLAs specify that certain levels of computing and memory resources are to be allocated for the customer. These resources are specified to and enforced by the hypervisor. The customer pays for the resources that are dynamically allocated for them at a price level related to their SLA.
Each of these two alternative virtualization stacks has an OS that actually realizes the VM. On top
of this OS the customer specific applications are run. The difference between these two models is
whether the hypervisor runs on another OS (referred to as the hosted server virtualization model)
or whether the hypervisor runs directly on the hardware (referred to as a native-hypervisor stack).
The native hypervisor mode is generally thought to be faster and more scalable.
Figure 2-10 shows two server models: a virtualized (multi-tenant) model and a traditional single-tenant approach. The left-hand side of Figure 2-10 illustrates the non-virtualized (i.e., traditional) model with three physical servers that are 5% to 30% utilized. In the virtualized model, the three physical servers have been replaced by one physical server running three VMs. The total utilization of this server is therefore 55% (the grand total for the three physical servers in the non-virtualized model). As a result of using the virtualized model, the other two physical servers can now be used to support other applications, or if there are no other applications they can be powered down to save energy.
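The consolidation arithmetic behind Figure 2-10 can be made concrete. The individual utilization figures below are illustrative assumptions of ours (the text only gives the 5-30% range and the 55% total), chosen to show how the per-server loads add up on the single consolidated server.

```python
# Hypothetical per-server utilizations within the 5%-30% range given in the
# text, chosen so they sum to the 55% total mentioned for Figure 2-10.
utilizations = [0.05, 0.20, 0.30]

consolidated = sum(utilizations)        # load on the single physical server
freed_servers = len(utilizations) - 1   # servers that can be powered down

print(f"consolidated utilization: {consolidated:.0%}, freed servers: {freed_servers}")
```

As long as the combined load stays below the capacity of one machine, the remaining servers can host other applications or be switched off entirely.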
In a data center, such as TeliaSonera’s DDCs, VMs are running on identical racks of server
hardware as well as on a homogeneous physical network. Today, isolation is mainly done through VLANs, with VLAN tags unique to a specific customer. This has become a problem because the VLAN-tag field is only 12 bits, which limits a data center to a maximum of 4096 unique VLANs. Amazon has solved this problem by means of VLAN stacking (as described earlier in
Section 2.4.1).
IaaS, also referred to as Hardware-as-a-Service (HaaS), is the backbone for every type of cloud environment. It consists of networking equipment, storage devices, and computing resources. In this model the provider owns, maintains, and operates the infrastructure for their customers. The customer pays the cloud provider to run an operating system of their own choice, along with their choice of applications, on top of the underlying hardware provided by the provider. An IaaS service provider can provide different server virtualization stacks. TeliaSonera is an IaaS provider.
PaaS refers to a model where the cloud service provider provides a platform complete with one or
more tools and different programming languages for their customers to build their own
applications. These applications are then delivered to users. Barium and TeliaSonera are both
PaaS-providers.
In the SaaS service model the customer of the cloud service provider controls only application
specific settings in the user interface. As a result, the cloud service provider must create and run a
specific service that is made available to end customers. Facebook is a SaaS provider.
2.6 Cloud Environments
There are different approaches to cloud deployment. In this section, we will describe three of
these: public cloud, private cloud, and hybrid cloud.
A private cloud is best suited for large companies with the competence to operate and maintain
a data center. These companies often have legacy applications that are incompatible with cloud
environments provided by IaaS providers. Additionally, many companies do not want to make
certain of their data available to others. Storing data in your own private cloud means that your
control over this data is guaranteed and that all the security processes are visible.
A public cloud is the dual of a private cloud. A public cloud deployment is best suited for small
companies without the capital or competence to operate and maintain their own data center. By
using a public cloud, they rent computing and storage from a cloud provider.
A hybrid cloud combines a company operated and maintained private cloud (that internally
offers some resources) with an IaaS (that provides some external resources). For example,
storage could be spread over the different cloud environments, with the location of specific data determined by the required level of secrecy and the availability of sufficient resources.
Typical legacy applications are best suited for the private part of the cloud. Moreover, using
cloud computing resources enables data and processing to effectively share and transfer
utilization of different resources amongst the different cloud entities, thus allowing a company
to avoid sharing critical business applications and sensitive data with a third party – while gaining the advantages of cloud computing for other parts of their operations.
Figure 2-11 illustrates the idea of a hybrid cloud. The cloud orchestrator is not necessarily an
application with a complete overview of the different cloud entities and resources, but rather a
set of rules for data, computing, and application migration between the clouds. For example, if
public cloud 1 goes down, certain data can be migrated to public cloud 2 and specific
applications can be automatically started on public cloud 2 and the private cloud. Another
interesting feature of a hybrid cloud is the heterogeneous nature of the cloud computing
environment that it offers, thus allowing a company to have different cloud entities for
specialized tasks. For example, public cloud 1 could be optimized for performance, while the
private cloud is optimized for security and authentication for certain applications. In such a solution, an application running on public cloud 1 could retrieve customer authentication services, database access tokens, etc. from the private cloud. Also, cloud environments can
extend the reach of a company by providing increased local bandwidth and reduced delay for
users in a given geographical region and can provide geographical diversity (avoiding all of the
company’s records being lost or inaccessible due to a network partition, flood, fire, etc.).
There are, however, some disadvantages to heterogeneity, such as the increased complexity of configuration when integrating multiple cloud environments, especially when combining different IaaSs and SaaSs. [12]
Figure 2-11: An overview of the hybrid cloud definition
A problem with hybrid cloud deployment is the inevitable diversity of the underlying
hardware infrastructure and software. Getting all of these components to interact with one
another without difficulties depends on a lot of initial manual configuration. The goal of
hybrid cloud computing is to mesh the multiple cloud environments together, despite their
differences – this requires cloud orchestration. It is important to point out that this
heterogeneity is also the strength of a hybrid cloud deployment strategy.[13]
We have chosen to include the two following studies. We found the major related work, done by Shin and Gu, very enlightening and correlated in parts with our own study, and thus relevant to include. We also include a minor work in which vulnerabilities within OpenFlow are discussed.
After agreeing upon SDN as the topic for this thesis, we started our literature study to
gain an overview of the area. This literature study lasted for a couple of weeks.
The focus then shifted towards narrowing the subject of our research; this unfortunately took longer than we first anticipated. Finally, with the subject and research question in
place we started discussions with TeliaSonera based on our general view of SDN
(obtained via our literature study).
Because our research process was solely based on the qualitative research methodology, as described in Section 1.4, the work process proceeded as follows:
1. Identify a question - How well does SDN work with TeliaSonera's network, and with the future security demands of a rapidly shifting industry?
2. Review literature - What has been done in the field of interest? Similar studies?
3. Purpose of the study/question - The main purpose of this study is to give TeliaSonera insight into lower-level network virtualization in terms of security.
4. Define the problem area - What concepts and terms are today associated with SDN
and related security issues?
5. Target group - TeliaSonera and its employees.
6. Road map - Specifies who will participate in this thesis project, and also how, when,
and where insight and content will be gathered and produced during this project.
7. Analyze literature - Analyze the content we have gathered along the way and sum it up in text.
• VirtualBox
• Cbench
• Mininet
• POX
VirtualBox was used for the virtualization. We set up a VM, running Mininet on top of an
Ubuntu server. Cbench is a program for benchmarking OpenFlow controllers. The
program works by generating packet-in messages. Cbench also has the capability to
emulate switches and hosts connected to a controller. The number of switches and hosts is specified as arguments when running Cbench. The output from Cbench can be used to compare the performance of different controllers. The controller that was used in this
experiment was POX, a single-threaded controller implemented in the Python
programming language. Mininet was used for emulating a network consisting of virtual
hosts, switches, and controllers.
We accessed the Mininet VM that we created through VirtualBox via SSH. In one
terminal window we started the POX controller and in another window we ran Cbench
altering the number of switches and hosts in order to benchmark different scenarios.
We will also investigate how TeliaSonera could work towards increasing the level of
confidence and trust in multi-tenancy solutions with a future implementation of SDN
within their data centers. In this particular case study, we have discussed the topic with
our tutors and TeliaSonera employees (mainly network architects). The analysis of this
issue is based on our own academic thoughts gathered from the literature studies and
feedback & support from our discussions with TeliaSonera.
This realization was confirmed when discussing the issue with sales representatives from
the company F5 Networks. They provided us with a non-TeliaSonera view of the issue
that helped us to look at it from another point of view. This also made us more aware of
the hollowness of the original problem statement. We contacted our tutor, Daniel Alfredsson, an IT and network architect, for his input regarding the issue. He confirmed what we suspected: there would be no major difference in terms of security regarding
DoS-attacks on TeliaSonera’s data centers. Due to this, we felt we had reached a dead end
for further analysis of this particular issue as the focus of our thesis project was on
TeliaSonera’s data centers. A further analysis of this issue would be out of our scope,
since DoS-attacks are primarily mitigated outside of the data center’s network. However,
one should note that analysis of the traffic within the data center could be used to improve
the detection of DoS attacks.
Figure 4-1 illustrates how illegitimate traffic from Internet-based DDoS-attacks is mitigated and dropped in TeliaSonera’s networks today. TeliaSonera Internal Network
(TSIN) is the network within the data center (DC) site. TeliaNet is TeliaSonera’s access
point to the internet for customers as well as internal traffic. A DoS-protection system
provided by Arbor Networks is deployed within TeliaNet. This DoS-protection system
works by collecting statistics and checking whether the traffic matches certain patterns,
given this information the system determines if certain traffic flows are legitimate or
illegitimate (i.e., coming from a DoS-attack). When an illegitimate traffic flow is
detected, the DoS-protection system reroutes these illegitimate packets and drops them
off elsewhere. Deploying the DoS-protection outside of the data center minimizes the
probability of links to the data center becoming clogged with illegitimate traffic. This placement means that the solution is outside the scope of
our thesis project, since our focus was on security issues of the traffic within the data
center space.
Figure 4-1: How a DDoS attack from the Internet is mitigated by the network
outside of the data center (DC)
However, consider DoS-attacks from another perspective – specifically, how one could potentially circumvent the security of an SDN network within the data center. As described in Section 2.2, a DoS-attack aims to consume all resources provided by the targeted service in order to cause a breakdown of this service and/or deny legitimate users access to this service. Our academic supervisor pointed out that in an SDN environment based on OpenFlow, new types of attacks are possible because the control plane is separated from the data plane and moved to a centralized point, i.e., the SDN controller; thus there must be communication between these two planes (via the OpenFlow protocol). For
example, this communication occurs when the data plane of a switch receives a frame that is not recognized; the switch then has to contact the SDN controller. The controller then
makes a decision about what to do with this frame, either drop the frame or add a new
flow entry to the flow table in the switch so that the next time a similar frame arrives the
switch would know how to forward it. By exploiting this function of OpenFlow there is potential for DoS-attacks that target the SDN controller (much as the attack described in Section 2.7.1 did) and that target the data plane’s bandwidth to and from this controller.
The procedure of making flow requests (also called packet-ins) and modifications consumes resources of the control plane. Thus, after a certain number of requests within a short period of time the SDN controller would be flooded and hence would have difficulty handling all of these requests. The data plane would also be affected, as all
previous flow requests sent to the controller and processed have produced new flow rules
and actions (flow modifications and packet-outs) leading to an increased consumption of
the resources of the data plane in both the network and in the switch
itself. Placing all of these bogus flow entries into the flow table could also hinder normal
traffic flows, especially if the attacker can generate enough new flow table entries to force
entries for valid ongoing traffic to be flushed out of the flow table.
Figure 4-2 illustrates an example of the process of consuming all resources (hence mounting a DoS-attack) of an OpenFlow-enabled switch by sending appropriately formed frames. In this case, we assume that the flow table can store up to 10 000 different flow rules. Each packet sent from the host (10.0.0.1) to the destination IP address (20.0.0.1) has a different destination port in order to make the switch ask the controller for a new flow rule for each frame. This could lead to an excessive number of packet-in messages
being sent by the switch, and an excessive number of flow-modification messages being
sent back by the controller, causing flooding of both the control and data planes. All of this depends upon whether new flow entries will have to be created with the only difference being the destination port. However, it is often more complicated than this. A
combination of altered match fields might be needed for a packet-in message to be
generated. In our simulation we will show the effects on the controller of frames coming
from multiple unique MAC-addresses.
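The flow-table flooding described above can be sketched as a small simulation. This is our own illustrative model, not OpenFlow code: the 10 000-rule table size comes from the example above, while the oldest-first eviction policy, the class layout, and the method names are assumptions made for the sketch.

```python
from collections import OrderedDict

TABLE_SIZE = 10_000  # as in the example above: at most 10 000 flow rules

class Switch:
    def __init__(self):
        self.flow_table = OrderedDict()  # flow key -> action, oldest first
        self.packet_ins = 0              # requests sent to the controller

    def receive(self, src_ip, dst_ip, dst_port):
        key = (src_ip, dst_ip, dst_port)
        if key in self.flow_table:
            return "forwarded"           # an existing rule matches
        self.packet_ins += 1             # unmatched: ask the controller
        if len(self.flow_table) >= TABLE_SIZE:
            self.flow_table.popitem(last=False)  # evict the oldest entry
        self.flow_table[key] = "output:1"        # controller installs a rule
        return "packet-in"

switch = Switch()
# Attacker: one frame per destination port overflows the table.
for port in range(15_000):
    switch.receive("10.0.0.1", "20.0.0.1", port)
```

After 15 000 unique frames the switch has generated 15 000 packet-ins, and the first 5 000 installed rules (including any for legitimate traffic) have been flushed out — the effect on valid ongoing flows described above.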
The most important points we learned from our discussions with TeliaSonera concerned:
• SLAs
• Automatization, and
• Redundancy, both path-wise and data-wise
Of those three, we think the SLA is the most important point, because an SLA defines
exactly what the customer expects from the service provider. An SLA codifies the
customer’s requirements for computing and storage capacity, as well as the required security and, overall, the experience the customer is paying for. The SLA could for
example include a specification of the minimum uptime, scalability, availability, etc.
Finding appropriate metrics to rate and compare the statistical measurements of a service is important. Unfortunately, unlike metrics for computing and storage capacity,
today there are no comparable metrics for availability and uptime. Currently the cloud
providers themselves collect their own data - which is not necessarily incorrect, but is
definitely biased. This leaves customers wondering if they are getting the services that
they are paying for according to their SLA.
This brings us to the need for redundancy of all of the different types of resources, as a
customer of a cloud service wants constant accessibility to their data and services.
However, this is counter to the effort by cloud providers to adopt a lean production
approach to preserve value while minimizing work (and costs). Ensuring availability of
cloud services requires both automatization and scheduled maintenance & updates of
software during times when the data center and these services are lightly loaded. Logging
and sniffing traffic flows is important due to the fact that cloud environments have
allowed providers to over-subscribe their services (i.e., they have accepted more
subscriptions than they have simultaneous resources for), which would not have been
possible with a single-tenant deployment.
Another interesting topic arose during discussions with TeliaSonera, but was left out of
the previous list of points. This topic concerns what to do when a customer wants to move
from one cloud provider to another. With cloud computing being a relatively new concept, there is currently no standardized way of migrating functionality, e.g., an
application. Migration of storage on the other hand is easier, although there are no cloud
provided tools for such data migration today. This issue was investigated by Vytautas
Zapolskas in his thesis on “Securing Cloud Storage Service”[23], where he discusses
possibilities for data migration between different cloud providers. Full storage migration is best done today by using personal storage or an FTP server to act as an intermediary during transmission between different cloud environments. Unfortunately, because of
different underlying software and supporting libraries, certain applications are not suitable
for every cloud environment. This also raises the question of whether, after a successful migration, a customer can ensure that their data is properly deleted from the earlier cloud provider’s infrastructure. This typically would be solved via an agreement where
questions and certain grey zones are specified. However, a more secure approach is to
keep the data only in encrypted format – then change the keys when the data is migrated.
TeliaSonera has the opinion that a certain number of their network services fit the cloud
well, while some services are better suited for a more traditional, single-instance
approach. Additionally, there are customers who demand very high security solutions,
such as defense industries, government agencies, etc. From the customer’s point of view
there is the possibility to adopt a hybrid or private cloud deployment to ensure that certain
data is not shared with a third-party – but this requires retaining appropriate resources and
competence in-house (and both come at a high price).
Switches as differentiator
Figure 4-3: Average responses per second for 10, 100, and 1 000 switches, with the number of hosts fixed at 1 000
We quickly noticed the significant drop in rps in the range from 100 to 1 000 switches. We suspected that in this range some switches were probably not able to transmit packet-ins to the controller, due to an overload of the controller. In the
interval in which the drop was largest (i.e., between 100 and 1 000), we introduced a
fourth repetition with 500 switches. As shown in Figure 4-4, when benchmarking using 500 switches this configuration had a better average rps than when using 1 000 switches; however, the controller still could not handle all of the packet-ins even for 500 switches. As future
work it would be interesting to introduce intermediate numbers of switches between
100 and 500 to see just where the behavior changes.
The benchmarking results, showing the exact numbers and min/max values for responses per second, are shown below as terminal input and output for the four repetitions of the test when varying the number of switches:
cbench -c localhost -p 6633 -m 10000 -l 10 -s 10 -M 1000 -t
RESULT: 10 switches 10 tests min/max/avg/stdev =
8652.12/8882.30/8813.28/80.07 responses/s
cbench -c localhost -p 6633 -m 10000 -l 10 -s 100 -M 1000 -t
RESULT: 100 switches 10 tests min/max/avg/stdev =
5909.37/7118.32/6793.67/378.02 responses/s
cbench -c localhost -p 6633 -m 10000 -l 10 -s 500 -M 1000 -t
RESULT: 500 switches 10 tests min/max/avg/stdev =
0.00/3623.29/1229.18/1373.87 responses/s
cbench -c localhost -p 6633 -m 10000 -l 10 -s 1000 -M 1000 -t
RESULT: 1000 switches 10 tests min/max/avg/stdev =
0.00/1869.84/207.76/587.63 responses/s
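The RESULT lines above can be parsed programmatically when comparing runs. The small helper below is our own (it is not part of Cbench); it extracts the min/max/avg/stdev figures and computes the drop in average rps between two of the runs shown above.

```python
import re

# Matches a Cbench summary line of the form:
# "RESULT: N switches 10 tests min/max/avg/stdev = a/b/c/d responses/s"
RESULT_RE = re.compile(
    r"RESULT:\s+(\d+)\s+switches.*?=\s*"
    r"([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)\s+responses/s"
)

def parse_result(text):
    """Return (switches, min, max, avg, stdev) from one Cbench RESULT line."""
    m = RESULT_RE.search(text)
    switches = int(m.group(1))
    mn, mx, avg, stdev = (float(m.group(i)) for i in range(2, 6))
    return switches, mn, mx, avg, stdev

r10 = parse_result("RESULT: 10 switches 10 tests min/max/avg/stdev = "
                   "8652.12/8882.30/8813.28/80.07 responses/s")
r1000 = parse_result("RESULT: 1000 switches 10 tests min/max/avg/stdev = "
                     "0.00/1869.84/207.76/587.63 responses/s")

drop = 1 - r1000[3] / r10[3]   # fraction of average rps lost at 1 000 switches
```

On the figures above, the average response rate at 1 000 switches is roughly 98% lower than at 10 switches, which quantifies the collapse visible in Figure 4-4.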
Figure 4-4: Switches as the differentiator, showing the results for 10, 100, 500, and 1 000 switches.
MAC as differentiator
Figure 4-5: Hosts as the differentiator, showing the results for 10 000, 100 000, and 1 000 000 hosts on a set of 10 switches
Had we done the same for 10 000 000 or more hosts, the linear decrease would be expected to continue if all other parameters were kept the same. If more switches are added in combination with an increasing number of hosts, performance is likely to drop even more quickly, as shown in the previous section regarding switches as the differentiator.
The terminal input and output was:
5.1 Conclusions
In this section, we present our conclusions on the work we have done over the course
of this thesis project. The conclusions are gathered from Chapter 4, where we analyzed
and presented data and ideas regarding the problem definition we described in Section
1.2.
The idea of attacking the SDN controller by flooding the control- and data plane with
unique and new flow modification messages (i.e. introducing a large number of new
flow rules) could be mitigated by adopting an SDN policy that overrides the decision
to create new packet-in messages by replacing them with wider match fields. Widening the match fields will result in fewer packet-in messages being created by the data plane, while still forwarding traffic. However, these wider rules may lead to sub-optimal forwarding decisions.
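The effect of wider match fields can be illustrated with a toy model. This sketch is our own illustration, not an SDN policy implementation: an exact-match rule keys on (src, dst, dst_port) and needs one packet-in per port, while a wider rule that wildcards the destination port absorbs the whole flood with a single packet-in.

```python
def count_packet_ins(packets, match_key):
    """Count controller requests for a packet stream, given a match-field key.

    match_key selects which header fields a flow rule matches on; any packet
    whose key is already in the flow table is forwarded without a packet-in.
    """
    flow_table = set()
    packet_ins = 0
    for pkt in packets:
        key = match_key(pkt)
        if key not in flow_table:
            packet_ins += 1          # unmatched flow: ask the controller
            flow_table.add(key)      # controller installs a rule for this key
    return packet_ins

# Attack traffic: same source and destination, a new destination port per frame.
flood = [("10.0.0.1", "20.0.0.1", port) for port in range(10_000)]

exact = count_packet_ins(flood, lambda p: p)            # match all fields
wide = count_packet_ins(flood, lambda p: (p[0], p[1]))  # wildcard the port
```

The exact-match policy generates one packet-in per frame, while the wide rule generates a single one; the price, as noted above, is that all port-differentiated traffic between the pair now gets the same (possibly sub-optimal) forwarding decision.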
Standardization is of great importance for all the points regarding SLAs and
automatization in the future, as mentioned in Section 2.6. Standards that are widely
used throughout the industry and standards with metrics for providers to follow will
enable both cloud providers and customers to compare their expected performance
with actually delivered performance. These metrics will make it easier for customers to
compare cloud providers and to make their decisions.
As of today, the metrics regarding computing and storage resources are easy to
understand, i.e., the capacity of the CPU and the amount of available storage space.
However, there are no standard metrics for comparison of accessibility and uptime for
cloud environments, other than the service provider’s own data, which is not always
that informative or unbiased.
5.2 Limitations
The psychological aspect of gaining trust is hard to anticipate in a customer segment
without performing large-scale surveys of these customers. We did not conduct a large-scale survey; instead, we arranged meetings via TeliaSonera with representatives of
several large networking companies (F5
Networks, Cisco Systems, and VMware) to gain qualitative insight. We also participated in a smaller networking conference at Stockholm Water Front, where we had the chance to speak to representatives from Arista Networks and Nuage Networks (Cisco, VMware, and F5 were also represented at this conference). While this provided us with great insight into the problems of lower-level virtualization and the related trust issues, we did not gain a broad opinion or deeper insight into the matter, but rather found a very industry-specific vision. This is a limitation
when trying to show results concerning the trust issues related to lower level
virtualization and SDN combined with multi-tenancy.
Since SDN is a relatively new technology, the chances of getting to experiment with it
were limited during the time of the project. In addition, the total duration of the thesis project caused us to consider right from the start whether to try to set up an environment for experimenting or not. We decided not to configure such an experimental environment
in order to conduct an experiment due to the limited time frame and what we initially
perceived as the lack of the necessary network equipment needed for such
experiments.
5.3 Future Work
A broader survey to gain overall insight into the trust issues related to SDN in a multi-tenancy environment should be conducted as future work.
5.4 Reflections
We have encountered different aspects worth reflecting on during our thesis project. In the numerous discussions with TeliaSonera, and on the business side of the project in general, we have discussed the economics and possibilities of lowering costs, in accordance with the aim of this thesis project. With a future implementation of SDN in TeliaSonera’s data
centers, TeliaSonera’s representatives have high hopes of lowering operating and
capital expenditures. This would lead to standardizing network equipment to a higher
degree than is possible today, easier configuration/management, and further
exploitation of pooling (in order to make better use of available resources). What is
important to remember is that these desires are the source of the overall drive for SDN
within the industry – all in the interest of lowering capital and operating costs.
Ethical aspects of our thesis project are primarily linked to the trust issues
regarding the multi-tenancy issues that we have investigated. Throughout this thesis,
we have taken this aspect into consideration and placed a strong emphasis on it since
we feel that this is important for net neutrality and the preservation of user integrity
online.
Social and environmental aspects of our work are more difficult to point out. However, the gains on the environmental side are quite clear: SDN allows increased virtualization, hardware will be more standardized, and networking devices can be virtualized – both potentially increasing the useful lifetime of this equipment (further minimizing the amount of electronic waste created over a given period of time). Additionally, as noted earlier in this thesis, virtualization
enables multiple VMs to be run on the same hardware, thus allowing other hardware to
be powered off – thus saving electrical energy and reducing cooling requirements. The
resulting higher utilization of the underlying hardware, while meeting the
customer’s needs, also minimizes the amount of
hardware that is actually needed – providing both economic and environmental
benefits.
Socially, there is a possibility of downsizing the number of personnel working hands-on
with data centers, thus lowering the operating expenditures we mentioned earlier and
enabling people to be deployed elsewhere in the organization – perhaps in more socially
(and economically) relevant ways.
REFERENCES
[1] Dan Pitt, 'Trust in the cloud: the role of SDN', Netw. Secur., vol. 2013, no. 3, pp. 5–6, Mar. 2013. DOI: 10.1016/S1353-4858(13)70039-4
[2] Denys Knertser and Victor Tsarinenko, 'Network Device Discovery', Master's thesis, KTH Royal Institute of Technology, School of Information and Communication Technology, Kista, Stockholm, Sweden, 2013 [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123509
[3] Codenomicon, 'Heartbleed Bug', 29-Apr-2014. [Online]. Available: http://heartbleed.com/. [Accessed: 01-Oct-2014]
[4] Linh Vu Hong, 'DNS Traffic Analysis for Network-based Malware Detection', Master's thesis, KTH Royal Institute of Technology, School of Information and Communication Technology, Kista, Stockholm, Sweden, 2012 [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93842
[5] Emmanouil Karamanos, 'Investigation of home router security', Master's thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2010 [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91107
[6] Eric Lai, 'Multitenancy & Cloud Computing Platforms: Four Big Problems', Übertech, 15-Feb-2012. [Online]. Available: http://www.zdnet.com/blog/sap/multitenancy-and-cloud-computing-platforms-four-big-problems/2559. [Accessed: 01-Oct-2014]
[7] Stacey Higginbotham, 'Software-defined networking forces Juniper's big shift — Tech News and Analysis', GigaOm, 15-Jan-2013. [Online]. Available: https://gigaom.com/2013/01/15/software-defined-networking-forces-junipers-big-shift/. [Accessed: 01-Oct-2014]
[8] H. Gredler and W. Goralski, The Complete IS-IS Routing Protocol. London: Springer, 2005.
[9] Cisco Systems, Inc., 'FabricPath Switching [Cisco Nexus 7000 Series Switches]', 25-Oct-2010. [Online]. Available: http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/fabricpath/configuration/guide/fp_switching.html. [Accessed: 01-Oct-2014]
[10] M. Mahalingam, D. Dutt, K. Duda, P. Agarwal, L. Kreeger, T. Sridhar, M. Bursell, and C. Wright, 'Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks', Internet Req. Comments, vol. RFC 7348 (Informational), Aug. 2014 [Online]. Available: http://www.rfc-editor.org/rfc/rfc7348.txt
[11] Cloud Computing Competence Center for Security, Fraunhofer Research Institution for Applied and Integrated Security (AISEC), 'What are Service Models in Cloud Computing?'. [Online]. Available: http://www.cloud-competence-center.com/understanding/cloud-computing-service-models/. [Accessed: 01-Oct-2014]
[12] Eze Castle Integration, 'Public and Private Clouds Explained', Public, Private & Hybrid Clouds, 2013. [Online]. Available: http://www.eci.com/cloudforum/private-cloud-explained.html. [Accessed: 01-Oct-2014]
[13] Margaret Rouse, 'What is hybrid cloud? - Definition from WhatIs.com', 2013. [Online]. Available: http://searchcloudcomputing.techtarget.com/definition/hybrid-cloud. [Accessed: 01-Oct-2014]
[14] Cisco Systems, Inc., 'Software-Defined Networking: Why We Like It and How We Are Building On It', Cisco Systems, Inc., White Paper, 2013.
[15] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, 'OpenFlow: Enabling Innovation in Campus Networks', SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, Mar. 2008. DOI: 10.1145/1355734.1355746
[16] Open Networking Foundation, OpenFlow Switch Specification, Version 1.4.0 (Wire Protocol 0x05). Open Networking Foundation, 2013 [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf
[17] J. F. Kurose and K. W. Ross, Computer Networking: A Top-Down Approach, 5th ed. Boston: Addison-Wesley, 2010.
[18] S. Shin and G. Gu, 'Attacking Software-defined Networks: A First Feasibility Study', in Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking, New York, NY, USA: ACM, 2013, pp. 165–166 [Online]. DOI: 10.1145/2491185.2491220
[19] Varun Tiwari, Rushit Parekh, and Vishal Patel, 'A Survey on Vulnerabilities of Openflow Network and its Impact on SDN/Openflow Controller', World Acad. J. Eng. Sci., vol. 1, no. 1005, pp. 1005-1 to 1005-5, 2014.
[20] Seungwon Shin, 'Software Defined Networking Security: Security for SDN and Security with SDN', 17-Jun-2013 [Online]. Available: http://www.krnet.or.kr/board/include/download.php?no=1751&db=dprogram&fileno=2
[21] Seungwon Shin, 'Software Defined Networking Security: Security for SDN and Security with SDN', 11-Apr-2014 [Online]. Available: http://www.kics.or.kr/storage/mailing/20140411/140411_095144993.pdf
[22] Keith Barker, 'Difference between QinQ and VLAN stacking - The Cisco Learning Network', The Cisco Learning Network, 17-Apr-2010. [Online]. Available: https://learningnetwork.cisco.com/thread/12500#62026. [Accessed: 01-Oct-2014]
[23] Vytautas Zapolskas, 'Securing Cloud Storage Service', Master's thesis, KTH Royal Institute of Technology, School of Information and Communication Technology (ICT), Stockholm, Sweden, 2012 [Online]. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2:538638
[24] Sixto Ortiz Jr., 'Processor - Dual Data Centers', Processor: Products, News & Information Data Center Can Trust, vol. 26, no. 20, 14-May-2014 [Online]. Available: http://www.processor.com/article/6101/dual-data-centers. [Accessed: 01-Oct-2014]
[25] D. Oran, 'OSI IS-IS Intra-domain Routing Protocol', Internet Req. Comments, vol. RFC 1142 (Informational), Feb. 1990 [Online]. Available: http://www.rfc-editor.org/rfc/rfc1142.txt