VXLAN
By Anand Nande
AGENDA
○ What is VXLAN?
○ Why VXLAN?
○ How does it work?
○ So now we can migrate VMs across subnets?
○ What about routing across VXLANs?
○ Any Performance Impact?
○ Demo
What is it?
Virtual eXtensible Local Area Network
● A tunneling protocol co-developed by a group of companies (VMware, Cisco, Arista, and others), specified in RFC 7348.
● Encapsulates Layer-2 Ethernet frames inside UDP packets (IANA port 4789).
● A 24-bit VXLAN Network Identifier (VNI) allows roughly 16 million isolated segments, versus 4096 VLAN IDs.
[Diagram: two hypervisors (10.0.0.1 and 10.0.0.2), each running a VTEP, connected over an L3 network. VM1-1 (IP=172.16.1.10, MAC=52:54:00:0e:08:b3) on Hypervisor 1 and VM2-1 (IP=172.16.1.12, MAC=52:54:00:30:de:e3) on Hypervisor 2 share the 172.16.1.0/24 segment with VNI=10; a second segment, 192.168.1.0/24, uses VNI=20.]
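On Linux, one leg of the topology above can be reproduced with iproute2. The following is a minimal sketch for Hypervisor 1, assuming eth0 as the underlay interface and vnet0 as the VM's tap device (both names are illustrative, not from the slides):

# Create a VTEP for VNI 10, tunneling over the L3 underlay to Hypervisor 2
ip link add vxlan10 type vxlan id 10 dstport 4789 \
    local 10.0.0.1 remote 10.0.0.2 dev eth0

# Bridge the VM's tap device with the VXLAN tunnel so its frames ride VNI 10
ip link add br10 type bridge
ip link set vxlan10 master br10
ip link set vnet0 master br10
ip link set vxlan10 up
ip link set br10 up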
VTEP-1's table
MAC                VNI ID   REMOTE VTEP
52:54:00:60:18:f9  10       192.168.122.141
52:54:00:8a:bd:ff  10       192.168.122.101
52:54:00:a0:1b:bb  10       192.168.122.186

VTEP-2's table
MAC                VNI ID   REMOTE VTEP
52:54:00:60:18:f9  10       192.168.122.141
52:54:00:8a:bd:ff  10       192.168.122.101
52:54:00:a0:1b:bb  10       192.168.122.186
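On a Linux VTEP, these tables live in the forwarding database (FDB) of the VXLAN device and can be inspected or pre-populated with the bridge tool. A sketch, assuming the vxlan10 device from the earlier example:

# List the learned MAC -> remote-VTEP mappings for the VXLAN device
bridge fdb show dev vxlan10

# Statically add an entry (values taken from VTEP-1's table above)
bridge fdb add 52:54:00:60:18:f9 dev vxlan10 dst 192.168.122.141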
[Screenshot: packet capture on one of the VTEPs. An additional Wireshark plugin is required to analyse the UDP payload here as VXLAN.]
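Such a capture can be taken on the VTEP's underlay interface. A sketch, assuming eth0 and the IANA-assigned VXLAN port 4789:

# Capture VXLAN-encapsulated traffic; -e also prints the outer Ethernet header
tcpdump -i eth0 -nn -e udp port 4789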
So now, can we migrate VMs across subnets?
...the answer is 'YES, we can'
What's the most important thing when we migrate a VM?
- The VM's IP and MAC addresses must not change, even after the VM has been
  migrated to a new hypervisor.
- We use the reference OpenFlow controller, which learns the new VM placement
  via OVSDB.
# ps -ef | grep controller | grep -v color
root  1396  1325  0 Feb17 pts/3  00:00:01 controller -v ptcp:6633
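The controller above listens passively on TCP port 6633 (ptcp:6633). Each hypervisor's OVS bridge is then pointed at it; a sketch, where the bridge name br0 and the controller's address are assumptions:

# Connect the local OVS bridge to the reference controller
ovs-vsctl set-controller br0 tcp:192.168.122.1:6633

# Verify the configured controller target
ovs-vsctl get-controller br0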
[Diagram: the same two-hypervisor topology (10.0.0.1 and 10.0.0.2) after VM1-1 is migrated to Hypervisor 2, joining VM2-1 (IP=172.16.1.12, MAC=52:54:00:30:de:e3) on the 172.16.1.0/24 / VNI=10 segment. Flows related to VM1-1 are removed on VTEP-1 and added on VTEP-2.]
What about routing across VXLANs?
- [VXLAN-to-VLAN] and [VLAN-to-VXLAN] translation, with the VLAN handed off to a router.
- Also known as 'vxlan-bridging': it extends the L2 domain over a vast L3 network.
- A hardware VTEP (router) only does bridging.
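In software, with Open vSwitch, VXLAN-to-VLAN bridging reduces to putting a VXLAN tunnel port and a VLAN access port on the same bridge. A minimal sketch, where the bridge name, interface names, VLAN tag, and remote VTEP IP are all assumptions:

# VXLAN side: carry VNI 10 to the remote VTEP
ovs-vsctl add-br br0
ovs-vsctl add-port br0 vxlan10 -- set interface vxlan10 type=vxlan \
    options:remote_ip=192.168.122.141 options:key=10

# VLAN side: eth1 as an access port on VLAN 100 toward the router
ovs-vsctl add-port br0 eth1 tag=100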
Any Performance Impact?
● The added flexibility comes at a cost: encapsulating and de-encapsulating packets adds processing
  overhead, which consumes CPU resources and degrades network performance, especially on
  high-speed connections.
● Hardware offloading capabilities found in some of today's modern NICs let this packet processing
  be offloaded to the NIC hardware, resulting in improved CPU utilization and higher throughput.
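Whether a NIC supports such offloads can be checked from its driver feature flags. A sketch, assuming eth0:

# UDP tunnel segmentation offload flags indicate VXLAN-capable hardware
ethtool -k eth0 | grep tx-udp_tnl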