With the rise of modern containers come new problems to solve, especially in networking. Numerous container SDN solutions have recently entered the market, each best suited to a particular environment. With multiple container runtimes and orchestrators available today, there is a need for a common layer that lets all of them interoperate with the network solutions.
As different environments demand different networking solutions, multiple vendors and viewpoints look to a specification to help guide interoperability. The Container Network Interface (CNI) is a specification started by CoreOS, with input from the wider open source community, aimed at making network plugins interoperable between container execution engines. It aims to be as common and vendor-neutral as possible in order to support a wide variety of networking options, from MACVLAN to modern SDNs such as Weave and flannel.
CNI is growing in popularity. It got its start as the network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins, allowing users to take advantage of virtual switching, MACVLAN and IPVLAN, as well as multiple IP management strategies, including DHCP. CNI is seeing even wider adoption now that Kubernetes has added support for it. Kubernetes accelerates development cycles while simplifying operations, and with CNI support it is taking the next step toward common ground for networking. Kubernetes users who care about interoperability can come to this session to learn the basics of CNI.
This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can be used to solve their networking challenges and how they can get involved.
KubeCon schedule link: http://sched.co/4VAo
Container Network Interface: Network Plugins for Kubernetes and beyond
3. How to network containers together?
- Cloud provider integration
  ○ AWS
  ○ GCE
4. How to network containers together?
- linux-bridge
- macvlan
- ipvlan
- Open vSwitch
- Weave
- Project Calico
- flannel
5. How to allocate IP addresses?
- From a fixed block on a host
- DHCP
- IPAM system backed by SQL database
- SDN assigned: e.g. Weave
6. How do you mix and match?
(macvlan | ipvlan) + (DHCP | host-local)
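A hedged sketch of one such combination, expressed as the JSON network config that slide 9 introduces; the file path and the master interface name are assumptions:

$ cat > /etc/cni/net.d/10-wan.conf <<EOF
{
  "name": "wan",
  "type": "macvlan",
  "master": "eth0",
  "ipam": {
    "type": "dhcp"
  }
}
EOF

Swapping "type": "dhcp" for "host-local" (plus a "subnet" key) gives the other combinations on this slide.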
7. Order matters!
- macvlan + DHCP
○ Create macvlan device
○ Use the device to DHCP
○ Configure device with allocated IP
- Routed + IPAM
○ Ask IPAM for an IP
○ Create veth and routes on host and/or fabric
○ Configure device with allocated IP
9. CNI
- Container can join multiple networks
- Network described by JSON config (see the example after this list)
- Plugin supports two commands
  ○ Add container to the network
  ○ Remove container from the network
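As a concrete example, here is a minimal sketch of such a JSON config, matching the bridge walk-through on the following slides; the file path is an assumption, and the subnet is chosen to fit the 10.0.5.9/16 address used later:

$ cat > /etc/cni/net.d/10-mynet.conf <<EOF
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mynet",
  "ipam": {
    "type": "host-local",
    "subnet": "10.0.0.0/16"
  }
}
EOF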
11. CNI: Step 1
Container runtime creates a network namespace and gives it a named handle
$ cd /var/lib/cni
$ touch myns
# unshare -n creates a new network namespace; bind-mounting
# /proc/self/ns/net onto the file keeps the namespace alive
# after the unshare'd process exits
$ unshare -n mount --bind /proc/self/ns/net myns
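The transcript jumps from Step 1 to Step 3; per the CNI spec, the intervening step is the runtime executing the plugin binary, with parameters passed as environment variables and the network config fed on stdin. A sketch, with the plugin path and container ID as assumptions:

$ CNI_COMMAND=ADD \
  CNI_CONTAINERID=mycontainer \
  CNI_NETNS=/var/lib/cni/myns \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf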
13. CNI: Step 3
Inside the bridge plugin (1):
# create the bridge if it does not already exist
$ brctl addbr mynet
# create a veth pair: one end stays on the host, the other is named
# per the CNI_IFNAME requested by the runtime
$ ip link add veth123 type veth peer name $CNI_IFNAME
$ brctl addif mynet veth123
# move the container end into the netns passed in CNI_NETNS
$ ip link set $CNI_IFNAME netns $CNI_NETNS
$ ip link set veth123 up
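Part (2) of the plugin is missing from this transcript; presumably it is the IPAM delegation step, in which the bridge plugin executes the IPAM plugin named in its config, passing along the same stdin and environment. A speculative sketch:

# delegate address allocation to the configured IPAM plugin; it prints
# a result JSON with the allocated IP on stdout
$ /opt/cni/bin/host-local < /etc/cni/net.d/10-mynet.conf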
15. CNI: Step 3
Inside the bridge plugin (3):
# switch to container namespace
$ ip addr add 10.0.5.9/16 dev $CNI_IFNAME
# Finally, print IPAM result JSON to stdout
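The printed result, in the shape of the early CNI result format current at the time of this talk (the gateway value here is an assumption), would look roughly like:

{
  "ip4": {
    "ip": "10.0.5.9/16",
    "gateway": "10.0.0.1"
  },
  "dns": {}
}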
16. Kubernetes + CNI + Docker
- Kubernetes has its own network plugins
- CNI "driver" is a k8s network plugin
- Future: make CNI the native plugin system
17. Kubernetes + CNI + Docker
- k8s starts "pause" container to create netns
- k8s invokes its plugin (CNI driver)
- k8s CNI driver executes a CNI plugin
- CNI plugin joins "pause" container to network
- Pod containers use "pause" container netns
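As a concrete illustration of wiring up the sequence above, the kubelet of that era selected the CNI driver via flags along these lines (flag names hedged to the contemporary docs; the config directory is an assumption):

$ kubelet --network-plugin=cni \
          --network-plugin-dir=/etc/cni/net.d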
18. Kubernetes + rkt
- rkt natively supports CNI
- Kubernetes delegates to rkt to invoke CNI plugins
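For comparison, a hedged example of rkt joining a CNI network directly; rkt reads its CNI configs from /etc/rkt/net.d/, and the network and image names here are arbitrary:

# rkt joins the named network defined in /etc/rkt/net.d/
$ rkt run --net=mynet docker://nginx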