Demystifying Docker & Kubernetes Networking
• Senior DevOps Engineer at OnMobile Global
• Part of the Bangalore Docker Community
• One of the contributors to Docker Labs
• Have travelled extensively for work
• A Linux and Docker enthusiast, avid runner and cyclist
LinkedIn profile: https://in.linkedin.com/in/balasundaram-natarajan-43471115
Who am I?
• Overview of Container Networking Standards
• Docker CNM (Container Network Model) Deep Dive
• Kubernetes CNI (Container Network Interface) Deep Dive
Agenda
Container Standards
Many different standards
Putting all Container Standards together
Allows users to build container images with any tool they choose; different tools are good for different use cases.
OCI-compliant runtimes can consume the config.json and root filesystem, and tell the kernel to create a container.
The container engine is responsible for creating the config.json file and unpacking images into a root filesystem.
Container in comparison with OSI
Container Building Blocks
Namespace
• Linux provides seven different namespaces
(Cgroup, IPC, Network, Mount, PID, User and UTS).
• Network namespaces (CLONE_NEWNET) determine the network resources that are
available to a process.
• Each network namespace has its own network devices, IP addresses, IP routing
tables, /proc/net directory, port numbers, and so on.
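A quick way to see this isolation on any Linux host (a minimal sketch; requires root, and the namespace name "demo" is purely illustrative):
$ sudo ip netns add demo
$ sudo ip netns exec demo ip addr show   # only an isolated, down loopback device exists here
$ ip addr show                           # the host namespace still sees all real NICs
$ sudo ip netns del demo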
cgroups:
• blkio, cpu, cpuacct, cpuset, devices, hugetlb, memory,
• net_cls, net_prio, pids, freezer, perf_event, ns,
• xt_cgroup (cgroup v2)
Container Building Blocks
In cgroups v1 you could assign threads of the same process to different cgroups; in cgroups v2 this is no longer
possible. RHEL 8 ships with cgroups v2 support.
Note: cgroups v2 requires kernel version 4.5 or above.
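One quick heuristic for checking which hierarchy a host mounts (a sketch; the path is the standard systemd mount point):
$ stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" indicates the v1 layout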
Container Networking
High Level Abstractions
CTR 1  CTR 2
Container Network Model
CNM vs CNI
See: https://kubernetes.io/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/
Containers and the CNM
Endpoint Sandbox Network Container
Container C1 Container C2 Container C3
Network A Network B
CNM Driver Interfaces
Docker Default Network Drivers
Null/None Network
Default Bridge Network (docker0)
Docker host
bridgenet1
Cntnr 1 Cntnr 2 Cntnr 3
Docker host
bridgenet2
Cntnr 4 Cntnr 5
bridgenet3
Cntnr 6 Cntnr 7
docker network create -d bridge bridgenet1
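A minimal sketch of the command above in action (container names and the alpine image are illustrative). User-defined bridges get Docker's embedded DNS, so containers resolve each other by name:
$ docker network create -d bridge bridgenet1
$ docker container run -dit --name cntnr1 --network bridgenet1 alpine sh
$ docker container run -dit --name cntnr2 --network bridgenet1 alpine sh
$ docker exec cntnr1 ping -c 1 cntnr2   # resolved by the embedded DNS server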
Docker Bridge Networking and Port Mapping
Docker host 1
Bridge
Cntnr1
10.0.0.8
L2/L3 physical network
:80
:8080 172.14.3.55
$ docker container run -p 8080:80 ...
Host port Container port
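A runnable version of the mapping above (nginx:alpine stands in for the container image):
$ docker container run -d --name web -p 8080:80 nginx:alpine
$ curl http://localhost:8080        # hits host port 8080, DNAT'd to the container's port 80
$ docker container port web         # prints the mapping: 80/tcp -> 0.0.0.0:8080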
Custom Bridge Network
Host Network
DEMO
https://labs.play-with-docker.com
Typical On-Premise Deployment
Macvlan Network
Ipvlan Mode L2
Ipvlan Mode L3
Overlay Mode
The overlay driver enables simple and secure multi-host networking
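A minimal sketch of creating one, assuming `docker swarm init` has already been run on the node (the network and service names are illustrative); --attachable additionally lets standalone containers join:
$ docker network create -d overlay --attachable mynet
$ docker service create --name web --network mynet --replicas 2 nginx:alpine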
Overlay Mode
What is Service Discovery
The ability to discover services within a Swarm
• Every service registers its name with the Swarm
• Every task registers its name with the Swarm
• Clients can look up service names
• Service discovery uses the DNS resolver embedded inside each
container and the DNS server inside of each Docker Engine
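A sketch of both lookups, using the mynet overlay from the previous slide (assuming it was created with --attachable); the embedded DNS returns the service VIP for the plain name and individual task IPs for tasks.<name>:
$ docker service create --name myservice --network mynet --replicas 3 nginx:alpine
$ docker container run --rm --network mynet alpine nslookup myservice        # resolves to the VIP
$ docker container run --rm --network mynet alpine nslookup tasks.myservice  # resolves to task IPs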
Service Discovery Big Picture
“mynet” network (overlay)
Docker host 1
task1.myservice task2.myservice
Docker host 2
task3.myservice
task1.myservice 10.0.1.19
task2.myservice 10.0.1.20
task3.myservice 10.0.1.21
myservice 10.0.1.18
Swarm DNS (service discovery)
Service Virtual IP (VIP) Load Balancing
• Every service gets a VIP when it’s created
• This stays with the service for its entire life
• Lookups against the VIP get load-balanced across all healthy
tasks in the service
• Behind the scenes it uses Linux kernel IPVS to perform transport
layer load balancing
• docker service inspect <service> (shows the service VIP)
NAME HEALTHY IP
myservice 10.0.1.18
task1.myservice Y 10.0.1.19
task2.myservice Y 10.0.1.20
task3.myservice Y 10.0.1.21
task4.myservice Y 10.0.1.22
task5.myservice Y 10.0.1.23
Service
VIP
Load balance
group
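One way to extract just the VIP with a Go template (a sketch; the service name comes from the diagram above):
$ docker service inspect myservice \
    --format '{{ (index .Endpoint.VirtualIPs 0).Addr }}'
# e.g. 10.0.1.18/24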
What is the Routing Mesh
Native load balancing of requests coming from an external source
• Services get published on a single port across the entire Swarm
• Incoming traffic to the published port can be handled by all Swarm
nodes
• A special overlay network called "ingress" is used to forward the
requests to a task in the service
• Traffic is internally load balanced as per normal service VIP load
balancing
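A sketch of publishing a service swarm-wide; afterwards any node answers on port 8080 whether or not it runs a task (the node address is a placeholder):
$ docker service create --name web --replicas 2 \
    --publish published=8080,target=80 nginx:alpine
$ curl http://<any-swarm-node>:8080   # the routing mesh forwards to a healthy task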
Routing Mesh Example
Docker host 2
task2.myservice
Docker host 1
task1.myservice
Docker host 3
IPVS IPVS IPVS
Ingress network
8080 8080 8080
“mynet” overlay network
LB
1. Three Docker hosts
2. New service with 2 tasks
3. Connected to the mynet overlay
network
4. Service published on port 8080
swarm-wide
5. External LB sends request to Docker
host 3 on port 8080
6. Routing mesh forwards the request to
a healthy task using the ingress network
Node
Node
Node
Node
Node
Node
Swarm Topology
Node
Node
Node
Node
Node
Node
Manager
Worker
● Each Node has a role
● Roles are dynamic
● Programmable Topology
Swarm Topology: High Availability
Swarm
Manager
Swarm
Manager
Swarm Topology: High Availability
Swarm
Manager
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Leader Follower Follower
Swarm
Manager
Swarm
Manager
Swarm Topology: High Availability
Swarm
Manager
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Leader Follower Follower
Swarm
Manager
Swarm
Manager
Swarm Topology: High Availability
Swarm
Manager
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Follower Follower Leader
Swarm
Manager
Swarm
Manager
Swarm Topology: High Availability
Swarm
Manager
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Swarm
Worker
Follower Follower Leader
Services and Tasks
• Services provide a piece of functionality
• Based on a Docker image
• Replicated Services and Global Services
• Tasks are the containers that actually do the work
• A service has 1-n tasks
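The two service modes side by side (service and image names are illustrative):
$ docker service create --name frontend --replicas 3 nginx:alpine   # replicated: the scheduler places 3 tasks
$ docker service create --name agent --mode global alpine top       # global: exactly one task on every node
$ docker service ps frontend                                        # lists the tasks (containers) backing the service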
How service deployment works
$ docker service create declares
the service name, network, image:tag
and scale.
Managers break the service down into
tasks, schedule them, and
workers execute the tasks.
Engines check what is running,
compare it with what was declared,
and "true up" the environment.
Declare → Schedule → Reconcile
Engine
Engine
Engine
Engine
Engine Engine
Services
$ docker service create --replicas 3 --name frontend --network
mynet --publish 80:80/tcp frontend_image:latest
mynet
Engine
Engine
Engine
Engine
Engine Engine
Services
$ docker service create --replicas 3 --name frontend --network
mynet --publish 80:80/tcp frontend_image:latest
$ docker service create --name redis --network mynet redis:latest
mynet
Engine
Engine
Engine
Engine
Engine Engine
Node Failure
$ docker service create --replicas 3 --name frontend --network
mynet --publish 80:80/tcp frontend_image:latest
$ docker service create --name redis --network mynet redis:latest
mynet
Engine
Engine
Engine
Engine
Engine
Desired State ≠ Actual State
$ docker service create --replicas 3 --name frontend --network
mynet --publish 80:80/tcp frontend_image:latest
$ docker service create --name redis --network mynet redis:latest
mynet
Engine
Engine
Engine
Engine
Engine
Converge Back to Desired State
$ docker service create --replicas 3 --name frontend --network
mynet --publish 80:80/tcp frontend_image:latest
$ docker service create --name redis --network mynet redis:latest
mynet
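One way to watch the reconciliation loop from the previous slides (service names as declared above):
$ docker service ls            # the REPLICAS column converges back to 3/3 for frontend
$ docker service ps frontend   # shows tasks rescheduled onto the surviving nodes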
Container Network Interface
Kubernetes At a High Level
Kubernetes Fundamentals
Kubernetes Fundamentals
Kubernetes Networking Fundamentals
Kubernetes Networking Fundamentals
Network Landscape in Kubernetes
CNI
• The container runtime must create a new network namespace for the container before invoking any plugins.
• The runtime must then determine which networks this container should belong to, and for each network, which plugins must
be executed.
• The network configuration is in JSON format and can easily be stored in a file. The network configuration includes mandatory
fields such as "name" and "type" as well as plugin (type) specific ones. The network configuration allows fields to change
values between invocations; for this purpose there is an optional field "args" which must contain the varying information (see the sample configuration after this list).
• The container runtime must add the container to each network by executing the corresponding plugins for each network
sequentially.
• Upon completion of the container lifecycle, the runtime must execute the plugins in reverse order (relative to the order in
which they were executed to add the container) to disconnect the container from the networks.
• The container runtime must not invoke parallel operations for the same container, but is allowed to invoke parallel operations
for different containers.
• The container runtime must order ADD and DEL operations for a container, such that ADD is always eventually followed by a
corresponding DEL. DEL may be followed by additional DELs but plugins should handle multiple DELs permissively (i.e. plugin
DEL should be idempotent).
• A container must be uniquely identified by a ContainerID. Plugins that store state should do so using a primary key of
(network name, CNI_CONTAINERID, CNI_IFNAME).
• A runtime must not call ADD twice (without a corresponding DEL) for the same (network name, container id, name of the
interface inside the container). This implies that a given container ID may be added to a specific network more than once only
if each addition is done with a different interface name.
Points to consider when implementing CNI
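A minimal sketch of such a network configuration, using the reference bridge and host-local IPAM plugins (the file name, bridge name and subnet are illustrative):
# As root, drop a network config where most runtimes look by default:
$ cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
# The runtime execs the "bridge" plugin binary with CNI_COMMAND=ADD, CNI_CONTAINERID,
# CNI_NETNS and CNI_IFNAME set in its environment and this JSON on stdin; every ADD is
# eventually paired with a DEL executed in reverse order.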
How CNI Works
Kube-Proxy
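In its default iptables mode, kube-proxy programs DNAT rules for every Service; one way to peek at them on a node (a sketch; requires root):
$ iptables -t nat -L KUBE-SERVICES -n | head
# Each ClusterIP jumps to a KUBE-SVC-* chain, which load-balances
# across KUBE-SEP-* endpoint chains (one per backing pod)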
Alternatives to Kube-proxy
Kubernetes Networking Model
Given the above constraints, the following problems need to be solved in Kubernetes networking:
Container to container networking
Pod Networking
Pod to Pod Networking
Pod to Pod Networking same node
Pod to Pod Networking different node
Overlay approach
Service
Kubernetes service concept
Kubernetes service concept
Pod to Service Networking
Service to Pod Networking
Service Networking Options
NodePort
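A minimal NodePort Service sketch (names, labels and the 30080 port are illustrative; nodePort must fall in the default 30000-32767 range):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web             # forwards to pods carrying this label
  ports:
  - port: 80             # ClusterIP port inside the cluster
    targetPort: 80       # container port on the pods
    nodePort: 30080      # opened on every node in the cluster
EOF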
Load Balancer
Ingress: Layer 7 Load Balancing
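A minimal Ingress sketch (host, names and paths are illustrative; an ingress controller such as nginx-ingress must be running for the rule to take effect):
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com        # L7 routing on the Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web            # the Service published earlier
            port:
              number: 80
EOF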
DENY all traffic to an application
LIMIT traffic to an application
DENY all non-whitelisted traffic in a namespace
DENY all traffic from other namespaces
ALLOW traffic from other namespaces
ALLOW traffic from external clients
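As a concrete instance of the first recipe above, a deny-all policy sketch (labels are illustrative; it only takes effect if the cluster's CNI plugin, e.g. Calico or Cilium, enforces NetworkPolicy):
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web       # selects the application's pods
  ingress: []        # selecting pods with no ingress rules denies all inbound traffic
EOF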
Multi-network pods
3rd Party CNI Plugins
• Calico provides high scalability on distributed architectures such as Kubernetes, Docker, and OpenStack.
• Cilium provides network connectivity and load balancing between application workloads, such as application containers and processes, and ensures transparent security.
• Contiv integrates containers, virtualization, and physical servers based on the container network using a single networking fabric.
• Contrail provides overlay networking for multi-cloud and hybrid cloud through network policy enforcement.
• Flannel makes it easier for developers to configure a Layer 3 network fabric for Kubernetes.
• Multus supports multiple network interfaces in a single pod on Kubernetes for SRIOV, SRIOV-DPDK, OVS-DPDK, and VPP workloads.
• Open vSwitch (OVS) offers a production-grade CNI platform with a standard management interface on OpenShift and OpenStack.
• ovn-kubernetes - a container network plugin built on Open vSwitch (OVS) and Open Virtual Networking (OVN), with support for both Linux and Windows
• Romana makes cloud network functions less expensive to build, easier to operate, and better performing than traditional cloud networks.
• Juniper Contrail / TungstenFabric - Provides overlay SDN solution, delivering multicloud networking, hybrid cloud networking, simultaneous overlay-underlay support, network policy
enforcement, network isolation, service chaining and flexible load balancing
• CNI-Genie - generic CNI network plugin
• Nuage CNI - Nuage Networks SDN plugin with Kubernetes network policy support
• Silk - a CNI plugin designed for Cloud Foundry
• Linen - a CNI plugin designed for overlay networks with Open vSwitch that fits into SDN/OpenFlow network environments
• Vhostuser - a dataplane network plugin supporting OVS-DPDK & VPP
• Amazon ECS CNI Plugins - a collection of CNI Plugins to configure containers with Amazon EC2 elastic network interfaces (ENIs)
• Bonding CNI - a Link aggregating plugin to address failover and high availability network
• Terway - a collection of CNI plugins based on the Alibaba Cloud VPC/ECS network products
• Knitter - a CNI plugin supporting multiple networking for Kubernetes
• DANM - a CNI-compliant networking solution for TelCo workloads running on Kubernetes
• VMware NSX – a CNI plugin that enables automated NSX L2/L3 networking and L4/L7 Load Balancing; network isolation at the pod, node, and cluster level; and zero-trust security
policy for your Kubernetes cluster.
• SR-IOV CNI plugin - for discovering and advertising SR-IOV network virtual functions (VFs) on a Kubernetes host.
References
• https://github.com/collabnix/dockerlabs
• https://docs.docker.com
• http://www.collabnix.com
• https://kubernetes.io/docs/concepts/cluster-administration/networking/
• https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/
• https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking
• https://success.docker.com/article/docker-ee-best-practices#astandarddeploymentarchitecture
• https://success.docker.com/article/networking
• https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/
• https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d
• https://sreeninet.wordpress.com/2016/05/29/docker-macvlan-and-ipvlan-network-plugins/
Thank You