01. Kubernetes-PPT.pptx
What is Kubernetes?
• Kubernetes is an orchestration engine and open-source platform for managing
containerized applications.
• Responsibilities include container deployment, scaling & descaling of containers &
container load balancing.
• Kubernetes is not a replacement for Docker, but it can be considered a replacement for
Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more
work to deploy.
• Born at Google and written in Go/Golang. Open-sourced in 2014 and donated to the CNCF
(Cloud Native Computing Foundation).
• Kubernetes v1.0 was released on July 21, 2015.
• Current stable release: v1.18.0.
Kubernetes Features
The features of Kubernetes are as follows:
•Automated Scheduling: Kubernetes provides an advanced scheduler to launch containers on cluster nodes based on their resource
requirements and other constraints, while not sacrificing availability.
•Self-Healing Capabilities: Kubernetes replaces and reschedules containers when nodes die. It also kills containers that don't
respond to user-defined health checks and doesn't advertise them to clients until they are ready to serve.
•Automated Rollouts & Rollbacks: Kubernetes rolls out changes to the application or its configuration while monitoring application health
to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes lets you roll back the change.
•Horizontal Scaling & Load Balancing: Kubernetes can scale the application up and down as per requirements with a simple
command, using a UI, or automatically based on CPU usage (see the sketch below).
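A minimal sketch of the autoscaling case, assuming a Deployment named nginx already exists and the cluster runs a metrics server; kubectl autoscale creates a HorizontalPodAutoscaler for it:

kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80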
Kubernetes Features
5. Service Discovery & Load balancing
With Kubernetes, there is no need to worry about networking and communication because Kubernetes will automatically assign IP addresses to
containers and a single DNS name for a set of containers, that can load-balance traffic inside the cluster.
Containers get their own IP so you can put a set of containers behind a single DNS name for load balancing.
6. Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can either opt for local storage, or choose a public cloud provider such as
GCP or AWS, or perhaps use a shared network storage system such as NFS, iSCSI, etc.
Container orchestration
• Container orchestration automates the deployment, management, scaling, and networking of
containers across the cluster. It is focused on managing the life cycle of containers.
• Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and
hosts can benefit from container orchestration.
• Container orchestration is used to automate the following tasks at scale:
 Configuring and scheduling of containers
 Provisioning and deployment of containers
 Redundancy and availability of containers
 Scaling up or removing containers to spread application load evenly across the host
infrastructure
 Movement of containers from one host to another if there is a shortage of
resources on a host, or if a host dies
 Allocation of resources between containers
 Exposure of services running in a container to the outside world
 Load balancing and service discovery between containers
 Health monitoring of containers and hosts
Swarm vs Kubernetes
Features | Kubernetes | Docker Swarm
Installation & Cluster Configuration | Installation is complicated, but once set up the cluster is very strong | Installation is very simple, but the cluster is not very strong
GUI | The Kubernetes Dashboard | No GUI
Scalability | Highly scalable & scales fast | Highly scalable & scales 5x faster than Kubernetes
Auto-Scaling | Kubernetes can do auto-scaling | Docker Swarm cannot do auto-scaling
Rolling Updates & Rollbacks | Can deploy rolling updates & does automatic rollbacks | Can deploy rolling updates, but no automatic rollbacks
Data Volumes | Can share storage volumes only with other containers in the same Pod | Can share storage volumes with any other container
Logging & Monitoring | In-built tools for logging & monitoring | 3rd-party tools like ELK have to be used for logging & monitoring
Both Kubernetes and Docker Swarm are important tools used to deploy containers
inside a cluster, but there are subtle differences between the two.
Kubernetes
• Kubernetes, also known as K8s, is an open-source container management tool
• It provides a container runtime, container orchestration, container-centric
infrastructure orchestration, self-healing mechanisms, service discovery, load balancing
and container (de)scaling.
• Initially developed by Google for managing containerized applications in a clustered
environment, but later donated to the CNCF
• Written in Golang
• It is a platform designed to completely manage the life cycle of containerized
applications and services using methods that provide predictability, scalability, and high
availability.
Kubernetes
https://www.cncf.io/certification/software-conformance/
Certified Kubernetes Distributions
• Cloud managed: EKS by AWS, AKS by Microsoft and GKE by Google
• Self managed: OpenShift by Red Hat and Docker Enterprise
• Local dev/test: MicroK8s by Canonical, Minikube
• Vanilla Kubernetes: the core Kubernetes project (bare metal), kubeadm
• Special builds: K3s by Rancher, a lightweight K8s distribution for edge devices
Online Emulator: https://labs.play-with-k8s.com/
Kubernetes Cluster
A Kubernetes cluster is a set of physical or virtual machines and other infrastructure resources
that are needed to run your containerized applications. Each machine in a Kubernetes cluster is
called a node.
There are two types of node in each Kubernetes cluster:
Master node(s): hosts the Kubernetes control plane components and manages the cluster
Worker node(s): runs your containerized applications
Kubernetes Architecture
[Diagram: Master node (ETCD, Scheduler, Controller, API Server) and two worker nodes (Kubelet, Kube-proxy, Docker, Pods/Containers), accessed via kubectl or the Web UI]
Kubernetes Architecture
https://blog.alexellis.io/kubernetes-in-10-minutes/
Kubernetes Architecture
Kubernetes Master
• The master is responsible for managing the complete cluster.
• You can access the master node via the CLI, GUI, or API.
• The master watches over the nodes in the cluster and
is responsible for the actual orchestration of
containers on the worker nodes.
• For achieving fault tolerance, there can be more than
one master node in the cluster.
• It is the access point from which administrators and
other users interact with the cluster to manage the
scheduling and deployment of containers.
• It has four components: ETCD, Scheduler, Controller
and API Server.
Kubernetes Architecture
Kubernetes Master
ETCD
• ETCD is a distributed, reliable key-value store used by
Kubernetes to store all data used to manage the cluster.
• When you have multiple nodes and multiple masters in
your cluster, etcd stores all that information on all the
nodes in the cluster in a distributed manner.
• ETCD is responsible for implementing locks within the
cluster to ensure there are no conflicts between the
masters.
Scheduler
• The scheduler is responsible for distributing work or containers across multiple
nodes.
• It looks for newly created containers and assigns them to nodes.
Kubernetes Architecture
Kubernetes Master
API Server
• Masters communicate with the rest of the cluster through
the kube-apiserver, the main access point to the control
plane.
• It validates and executes the user's REST commands.
• kube-apiserver also makes sure that configurations in etcd
match the configurations of containers deployed in the
cluster.
Controller Manager
• The controllers are the brain behind orchestration.
• They are responsible for noticing and responding when nodes, containers or endpoints go down.
The controllers make decisions to bring up new containers in such cases.
• The kube-controller-manager runs control loops that manage the state of the cluster by checking if
the required deployments, replicas, and nodes are running in the cluster.
Kubernetes Architecture
Kubernetes Master
Kubectl
• kubectl is the command-line utility with which we can
interact with the k8s cluster
• Uses the APIs provided by the API server to interact.
• Also known as the kube command-line tool, kubectl or
kube control.
• Used to deploy and manage applications on a Kubernetes cluster
• kubectl run nginx - used to deploy an application on the cluster
• kubectl cluster-info - used to view information about the cluster
• kubectl get nodes - used to list all the nodes that are part of the cluster
Kubernetes Architecture
Kubernetes Worker
Kubelet
• Worker nodes have the kubelet agent that is responsible for
interacting with the master to provide health information about
the worker node and to carry out actions requested by the master on the worker
nodes.
Kube-proxy
• The kube-proxy is responsible for ensuring network traffic is routed properly to internal and external
services as required, based on the rules defined by network policies in kube-controller-manager
and other custom controllers.
Kubernetes
What is K3s?
• K3s is a fully compliant Kubernetes distribution with the following enhancements:
 Packaged as a single binary
 <100MB memory footprint
 Supports ARM and x86 architectures
 Lightweight storage backend based on sqlite3 as the default storage mechanism, replacing the
heavier ETCD server
 Docker is replaced in favour of the containerd runtime
 Inbuilt Ingress controller (Traefik)
Kubernetes
K3s Architecture
Kubernetes
K3s Setup using VirtualBox
• Use 3 VMs (1 master and 2 workers). All VMs should have a bridged network adapter enabled
• Create a host-only networking adapter (DHCP disabled) and connect all VMs to it. This is to have static IPs for
all VMs in the cluster. Make sure static IPs are configured in each VM in the same subnet range as the host-only
network
On Master
• bash -c "curl -sfL https://get.k3s.io | sh -"
• TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
• IP=<IP of the master node where the API server is running>
On Worker nodes
• bash -c "curl -sfL https://get.k3s.io | K3S_URL=https://$IP:6443 K3S_TOKEN=$TOKEN sh -"
https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
Kubernetes Pods
Kubernetes
Pods
Pods are often ephemeral
• Basic scheduling unit in Kubernetes.
• Kubernetes doesn't run containers directly; instead it wraps one or more containers into a higher-level
structure called a pod
• It is also the smallest deployable unit that can be created, scheduled, and managed on a Kubernetes
cluster. Each pod is assigned a unique IP address within the cluster.
• Pods can hold multiple containers as well, but you should limit yourself when possible. Because pods
are scaled up and down as a unit, all containers in a pod must scale together, regardless of their
individual needs. This leads to wasted resources.
Ex: nginx, mysql, wordpress...
Kubernetes
Pods
• Any containers in the same pod will share the same storage volumes and network resources and
communicate using localhost
• K8s uses YAML to describe the desired state of the containers in a pod. This is also called a Pod Spec.
These objects are passed to the kubelet through the API server.
• Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a
single pod instance can't carry the load, Kubernetes can be configured to deploy new replicas of your
pod to the cluster as necessary.
Inside the cluster
Using the example from the figure above, you could run curl 10.1.0.1:3000 to
communicate with one container and curl 10.1.0.1:5000 to communicate with the
other container from other pods. However, if you wanted to talk between
containers - for example, calling the top container from the bottom one - you could
use http://localhost:3000.
Kubernetes
Scaling Pods
• All containers within the pod get scaled together.
• You cannot scale individual containers within the pods. The pod is the unit of scale in K8s.
• The recommended way is to have only one container per pod. Multi-container pods are very rare.
• In K8s, an init container is sometimes used as a second container inside a pod (see the sketch below).
Init containers are exactly like regular containers, except that they always run to completion. Each init container must complete successfully before the next one starts.
If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds.
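A minimal sketch of an init container, assuming a Service named mysql-service exists in the cluster; the init container just waits until that DNS name resolves before the main container starts (all names are illustrative):

# init-demo.yml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db              # runs to completion before the main container starts
    image: busybox
    command: ["sh", "-c", "until nslookup mysql-service; do sleep 2; done"]
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80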
Kubernetes
Imperative vs Declarative commands
• The Kubernetes API defines a lot of objects/resources, such as namespaces, pods, deployments,
services, secrets, config maps etc.
• There are two basic ways to deploy objects in Kubernetes: imperatively and declaratively
Imperatively
• Involves using any of the verb-based commands like kubectl run, kubectl create, kubectl
expose, kubectl delete, kubectl scale and kubectl edit
• Suitable for testing and interactive experimentation
Declaratively
• Objects are written in YAML files and deployed using kubectl create or kubectl apply
• Best suited for production environments
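As a quick sketch of the two styles for the same nginx pod (the pod name and file name are just examples):

# Imperative: create the pod directly with a verb-based command
kubectl run nginx --image=nginx

# Declarative: describe the pod in a YAML file, then apply it
kubectl apply -f pod-definition.yml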
Kubernetes
Manifest / Spec file
• K8s object configuration files - written in YAML or JSON
• They describe the desired state of your application in terms of Kubernetes API objects. A file
can include one or more API object descriptions (manifests).
# manifest file template
apiVersion - version of the Kubernetes API
used to create the object
kind - kind of object being created
metadata - data that helps uniquely identify
the object, including a name and
optional namespace
spec - configuration that defines the desired state
of the object
apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …
---
apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …
Multiple resource definitions in one file are separated by "---"
Kubernetes
Manifest file Man Pages
List all K8s API-supported objects and versions:
kubectl api-resources
kubectl api-versions
Man pages for objects:
kubectl explain <object>.<option>
kubectl explain pod
kubectl explain pod.apiVersion
kubectl explain pod.spec
Kubernetes
Once the cluster is setup…
kubectl version
kubectl get nodes -o wide
Kubernetes
Once the cluster is setup…
kubectl cluster-info
kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state
Kubernetes
Creating Pods
kubectl run <pod-name> --image <image-name>
kubectl run nginx --image nginx --dry-run=client
kubectl run nginx --image nginx --dry-run=client -o yaml
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
--dry-run doesn't run the command but shows what changes the command would make to the cluster
-o yaml shows the command output in YAML; a shortcut to create a declarative YAML file from imperative commands
Kubernetes
Creating Pods: Imperative way
kubectl run test --image nginx --port 80 - also exposes port 80 of the container
kubectl get pods -o wide
kubectl describe pod test - displays extended information about the pod
Kubernetes
Creating Pods
curl <ip-of-pod>   # e.g. curl 10.244.2.2
Kubernetes
Creating Pods:Declarative way
• kubectl create -f pod-definition.yml
• kubectl apply -f pod-definition.yml - if the manifest file is
changed/updated after deployment and the pod needs to be re-deployed
• kubectl delete pod <pod-name>
# pod-definition.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: webapp
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
Kubernetes
Pod Networking
Kubernetes
Replication Controller
• Asingle pod may not besufficient to handle the user traffic.Also if this only pod goes
down because of a failure, K8s will not bring this pod up again automatically
• In order to prevent this, we would like to have more than one instance or POD running at
the same time inside the cluster
• Kubernetes supports different controllers(Replicacontroller &ReplicaSet) to handle
multiple instances of a pod. Ex: 3 replicas of nginx webserver
• Replication Controller ensures high availability by replacing the unhealthy/deadpods with
a new one to ensure required replicas are always running inside a cluster
• So, does that mean you can’t use a replication controller if you plan to have a single POD?
No! Even if you have a single POD, the replication controller can help by automatically
bringing up a new POD when the existing one fails.
• Another reason we need replication controller is to create multiple PODs to share the load
across them.
• Replica controller is deprecated and replaced by Replicaset
Kubernetes
Behind the scenes…
$ kubectl run nginx --image=nginx --replicas=3
[Diagram repeated across the following steps: K8s Master (ETCD, API Server, Scheduler, Controller) and two slave nodes, each running a Kubelet]
1. The command is given through kubectl to the API Server.
2. The API Server updates the deployment details in ETCD.
3. The Controller manager, through the API Server, identifies its workload and creates a ReplicaSet.
4. The ReplicaSet creates the required number of pods and updates ETCD. Note the status of the pods: they are still in PENDING state.
5. The Scheduler identifies its workload through the API Server and decides the nodes onto which the pods are to be scheduled. At this stage, the pods are assigned to nodes.
6. The Kubelet on each node identifies its workload through the API Server and understands that it needs to deploy some pods on its node.
7. The Kubelet instructs the Docker daemon to create the pods. At the same time it updates the status to 'Pods CREATING' in ETCD through the API Server.
8. Once the pods are created and running, the Kubelet updates the pod status to RUNNING in ETCD through the API Server.
Kubernetes
Labels and Selectors
Labels
• Labels are key/value pairs that are attached to objects,
such as pods
• Labels allow you to logically group certain objects by giving
various names to them
• You can label pods, services, deployments and
even nodes
kubectl get pods -l environment=production
kubectl get pods -l environment=production,tier=frontend
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Kubernetes
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Labels and Selectors
• If labels are not mentioned while deploying k8s objects using imperative commands, the
label is auto set as app: <object-name>
kubectl run --image nginx nginx
kubectl get pods --show-labels
Adding Labels
kubectl label pod nginx environment=dev
Kubernetes
Labels and Selectors
Selectors
• Selectors allow you to filter objects based on labels
• The API currently supports two types of selectors: equality-based and set-based
• A label selector can be made of multiple requirements which are comma-separated
Equality-based Selectors
• Equality- or inequality-based requirements allow
filtering by label keys and values.
• Three kinds of operators are admitted: =, ==, !=
Used by Replication Controllers and Services
Kubernetes
Labels and Selectors
Selectors
• Selectors allow you to filter objects based on labels
• The API currently supports two types of selectors: equality-based and set-based
• A label selector can be made of multiple requirements which are comma-separated
Set-based Selectors
• Set-based label requirements allow filtering
keys according to a set of values.
• Three kinds of operators are supported: in, notin
and exists (only the key identifier).
kubectl get pods -l 'environment in (production, qa)'
Used by ReplicaSets, Deployments, DaemonSets
Kubernetes
ReplicaSet
• ReplicaSets are a higher-level API that gives the ability to easily run multiple instances of a
given pod
• ReplicaSets ensure that the exact number of pods (replicas) is always running in the
cluster by replacing any failed pods with new ones
• The replica count is controlled by the replicas field in the resource definition file
• ReplicaSet uses set-based selectors whereas ReplicationController uses equality-based selectors; a set-based example is sketched below
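A minimal sketch of a set-based selector on a ReplicaSet using matchExpressions (it reuses the webapp label from the manifest on the next slide); matchLabels, as shown there, is the equality-style shorthand:

spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: app          # select pods whose app label is in the listed values
      operator: In
      values:
      - webapp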
Kubernetes
Pod vs ReplicaSet
# replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: webapp
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      name: nginx-pod
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: webapp
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
Kubernetes
ReplicaSet Manifest file
(Same replicaset.yml manifest as on the previous slide)
ReplicaSet
kubectl create -f replica-set.yml
kubectl get rs -o wide
kubectl get pods -o wide
Kubernetes
ReplicaSet
• kubectl edit replicaset <replicaset-name> - edit a replicaset; e.g. image, replicas
• kubectl delete replicaset <replicaset-name> - delete a replicaset
• kubectl delete -f replica-set.yml
• kubectl get all - get pods, replicasets, deployments, services all in one shot
• kubectl replace -f replicaset-definition.yml - replaces the pods with the updated
definition file
• kubectl scale --replicas=6 -f replicaset-definition.yml - scale using the definition file
• kubectl scale --replicas=6 replicaset <replicaset-name> - scale using the name of the
replicaset
Kubernetes Pods
Kubernetes Deployments
Kubernetes
Deployment
• A Deployment provides declarative updates for Pods and ReplicaSets.
• You describe a desired state in a Deployment, and the Deployment Controller changes the actual
state to the desired state at a controlled rate.
• It seems similar to ReplicaSets but with advanced functions
• Deployment is the recommended way to deploy a pod or RS
• By default, Kubernetes performs deployments with the rolling update strategy.
• Below are some of the key features of deployments:
 Easily deploy a RS
 Rolling update of pods
 Rollback to previous deployment versions
 Scale deployments
 Pause and resume deployments
Kubernetes
Deployment Strategy
• Whenever we create a new deployment, K8s triggers a Rollout.
• Rollout is the process of gradually deploying or upgrading your application containers.
• For every rollout/upgrade, a version history is created, which helps in rolling back to a working
version in case of an update failure
• In Kubernetes there are a few different ways to release updates to an application:
• Recreate: terminate the old version and release the new one. The application experiences downtime.
• RollingUpdate: release a new version in a rolling-update fashion, one after the other. It's the
default strategy in K8s. No application downtime is required.
• Blue/green: release a new version alongside the old version, then switch traffic
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0

spec:
  replicas: 10
  strategy:
    type: Recreate
Kubernetes
Rolling Update Strategy
• By default, a deployment ensures that no more than 25% of your pods are unavailable during an update and does not
update more than 25% of the pods at a given time
• It does not kill old pods until/unless enough new pods come up
• It does not create new pods until a sufficient number of old pods are killed
• There are two settings you can tweak to control the process: maxUnavailable and maxSurge. Both have a
default value of 25%
• The maxUnavailable setting specifies the maximum number of pods that can be unavailable during the rollout
process. You can set it to an actual number (integer) or a percentage of desired pods
Let's say maxUnavailable is set to 40%. When the update starts, the old ReplicaSet is scaled down to 60%.
As soon as new pods are started and ready, the old ReplicaSet is scaled down again and the new ReplicaSet is scaled up. This
happens in such a way that the total number of available pods (old and new, since we are scaling up and down) is always at least
60%.
• The maxSurge setting specifies the maximum number of pods that can be created over the desired number of
pods
If we use the same percentage as before (40%), the new ReplicaSet is scaled up right away when the rollout starts. The new
ReplicaSet will be scaled up in such a way that it does not exceed 140% of desired pods. As old pods get killed, the new ReplicaSet
scales up again, making sure it never goes over 140% of desired pods
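As a sketch matching the 40% example above (the numbers are illustrative, not a recommendation):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 40%   # at most 4 of the 10 desired pods may be unavailable
      maxSurge: 40%         # total pods may temporarily reach 140% of desired (14)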
Kubernetes
Deployments
• kubectl create deployment nginx --image nginx --dry-run -o yaml
• kubectl create -f deployment.yml --record (--record is optional, it just
records the events in the deployment)
• kubectl get deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
Kubernetes
Deployments
• kubectl describe deployment <deployment-name>
(Uses the same nginx-deployment manifest as on the previous slide)
Kubernetes
Deployments
• kubectl get pods -o wide
• kubectl edit deployment <deployment-name> - perform a live edit of the
deployment
• kubectl scale deployment <deployment-name> --replicas 2
• kubectl apply -f deployment.yml - redeploy a modified yaml file; e.g.
replicas changed to 5, image to nginx:1.18
(Uses the same nginx-deployment manifest as on the previous slides)
Kubernetes
Deployments
• kubectl rollout status deployment <deployment-name>
• kubectl rollout history deployment <deployment-name>
Kubernetes
Deployments
• kubectl rollout undo deployment <deployment-name>
• kubectl rollout undo deployment <deployment-name> --to-revision=1
• kubectl rollout pause deployment <deployment-name>
• kubectl rollout resume deployment <deployment-name>
• kubectl delete -f <deployment-yaml-file> - deletes the deployment and related dependencies
• kubectl delete all --all - deletes pods, replicasets, deployments and services in the current namespace
Kubernetes
Namespaces
Namespaces are Kubernetes objects which partition a single Kubernetes cluster into multiple
virtual clusters
• Kubernetes clusters can manage large numbers of unrelated
workloads concurrently, and organizations often choose to deploy
projects created by separate teams to shared clusters.
• With multiple deployments in a single cluster, there are high
chances of deleting deployments that belong to different projects.
• So namespaces allow you to group objects together so you can
filter and control them as a unit/group.
• Namespaces provide a scope for names. Names of resources need
to be unique within a namespace, but not across namespaces.
• So each Kubernetes namespace provides the scope for the Kubernetes
names it contains; which means that using the combination of an
object name and a namespace, each object gets a unique identity
across the cluster
Kubernetes
Namespaces
By default, a Kubernetes cluster is created with the
following three namespaces:
• default: the default namespace for users. By default,
all the resources created in a Kubernetes cluster are
created in the default namespace
• kube-system: the namespace for objects created
by the Kubernetes system/control plane. Any changes to
objects in this namespace could cause irreparable
damage to the cluster itself
• kube-public: namespace for resources that are publicly
readable by all users. This namespace is generally
reserved for cluster usage, like ConfigMaps and Secrets
Kubernetes
kubectl get all -n kube-system (lists available objects under a specific namespace)
Namespaces
kubectl get namespaces
kubectl get all --all-namespaces (lists available objects under all available namespaces)
Kubernetes
Namespaces
Create a namespace
kubectl create ns dev # Namespace for Developer team
kubectl create ns qa # Namespace for QA team
kubectl create ns production # Namespace for Production team
Deploy objects in a namespace
kubectl run nginx --image=nginx -n dev
kubectl get pod/nginx -n dev
kubectl apply --namespace=qa -f pod.yaml
Delete a namespace
kubectl delete ns production
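Namespaces can also be created declaratively; a minimal sketch (the name dev and the file name are just examples):

# namespace-dev.yml
apiVersion: v1
kind: Namespace
metadata:
  name: dev

kubectl apply -f namespace-dev.yml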
Kubernetes Services
Kubernetes
Services
• Services logically connect pods across the cluster to enable networking between them
• The lifetime of an individual pod cannot be relied upon; everything from their IP addresses to
their very existence is prone to change.
• Kubernetes doesn't treat its pods as unique, long-running instances; if a pod encounters an
issue and dies, it's Kubernetes' job to replace it so that the application doesn't experience any
downtime
• Services make sure that even after a pod (back-end) dies because of a failure, the newly
created pods will be reached by its dependency pods (front-end) via services. In this case,
front-end applications always find the back-end applications via a simple service (using the service
name or IP address) irrespective of their location in the cluster
• Services point to pods directly using labels. Services do not point to deployments or
ReplicaSets. So, all pods with the same label get attached to the same service
• 3 types: ClusterIP, NodePort and LoadBalancer
Kubernetes
Pods' lifecycles are erratic; they
come and go by Kubernetes' will.
Not healthy? Killed.
Not in the right place? Cloned,
and killed.
So how can you send a request to
your application if you can't know
for sure where it lives?
The answer lies in services.
Services are tied to the pods using
pod labels and provide a stable
endpoint for the users to reach the
application.
When requesting your application, you don't care about its location or about
which pod answers the request.
Services
• ClusterIP service is the default Kubernetes service.
• It gives you a service inside your cluster that other apps inside
your cluster can access
• It restricts access to the application within the cluster itself
and no external access
• Useful when a front-end app wants to communicate with
back-end
• Each ClusterIP service gets a unique IP address inside the
cluster
• Similar to --links in Docker
Kubernetes Services
ClusterIP
Services point to pods directly using labels!!!
https://kubernetes.io/docs/concepts/services-networking/service/
Kubernetes
When services are not available
• Imagine 2 pods on 2
separate nodes node-1 &
node-2 with their local IP
address
• pod-nginx can ping and connect
to pod-python using its internal
IP 1.1.1.3.
Kubernetes
When services are not available
• Now let’s imagine the pod-python
dies and a new one is created.
• Now pod-nginx cannot reach pod-
python on 1.1.1.3 because its IP is
changed to 1.1.1.5.
How do we remove this dependency?
Kubernetes
Enter services…
• Services logically connect pods
together
• Unlike pods, a service is not scheduled
on a specific node. It spans across the
cluster
• Pod-nginx can always safely connect to
pod-python using the service IP 1.1.10.1 or
the DNS name of the service (service-
python)
• Even if the python pod gets deleted and
recreated again, the nginx pod can still
reach the python pod using the service, but
not with the IP of the python pod directly
Kubernetes
Enter services…
• Multiple ClusterIP services
Kubernetes
ClusterIP
# clusterservice.yml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-backend

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  labels:
    app: nginx-backend
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

[Diagram: an Alpine pod (10.244.0.24) reaches an Nginx pod (10.244.0.22) through the ingress-nginx ClusterIP service (10.20.0.18), port 80 -> targetPort 80]
Kubernetes
Services
ClusterIP
(Same clusterip-service.yml and pod.yml as on the previous slide)
kubectl create -f clusterservice.yml
kubectl create -f pod.yml
root@alpine:# curl ingress-nginx
check the endpoints: kubectl describe svc/<svc-name>
Kubernetes
Services
ClusterIP
Kubernetes
Services
NodePort
• NodePort opens a specific port on all the nodes in
the cluster and forwards any traffic that is received
on this port to internal services
• Useful when front-end pods are to be exposed outside
the cluster for users to access them
• NodePort is built on top of the ClusterIP service by
exposing the ClusterIP service outside of the cluster
• NodePort must be within the port range 30000-
32767
• If you don't specify this port, a random port will be
assigned. It is recommended to let k8s auto-assign
this port
Kubernetes
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
Kubernetes
NodePort
Kubernetes
Multi Instances in same node
Kubernetes
Multi Instances across cluster
Kubernetes
NodePort
• Application can be reached from any of the available nodes in the cluster using
<node-ip>:<node-port>
Kubernetes
NodePort
# nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  selector:
    app: nginx-frontend

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-frontend
  labels:
    app: nginx-frontend
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

[Diagram: external traffic hits any node (e.g. 192.168.0.2) on NodePort 30001 and is forwarded to the service (10.105.32.217) on port 80, then to the Nginx pod (10.244.1.66) on targetPort 80]

kubectl create -f nodeport-service.yml
kubectl create -f pod.yml
Kubernetes
Demo: NodePort
kubectl create -f nodeport-service.yml
kubectl create -f pod.yml
kubectl get services
Kubernetes
Demo: NodePort
kubectl describe service <service-name>
Kubernetes
Demo: NodePort
kubectl get nodes -o wide
192.168.0.107:30001
Kubernetes
NodePort Limitations
• In a NodePort service, users can access the application using
the URL http://<node-ip>:<node-port>
• In a production environment, we do not want the users to
have to type in the IP address every time to access the
application
• So we configure a DNS server to point to the IPs of the
nodes. Users can now access the application using the
URL http://xyz.com:30001
• Now, we don't want the user to have to remember the port
number either.
• However, a NodePort service can only allocate high-
numbered ports, greater than 30,000.
• So we deploy a proxy server between the DNS server
and the cluster that proxies requests on port 80 to port
30001 on the nodes.
• We then point the DNS to the proxy server's IP, and users
can now access the application by simply visiting
http://xyz.com
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 30001
    protocol: TCP
NodePort 30001 is being used only for the demo. You can configure this port
number in the service manifest file or let K8s auto-assign it for you.
Kubernetes
Services
Load Balancer
• A LoadBalancer service is the standard way to expose a
Kubernetes service to the internet
• On GKE (Google Kubernetes Engine), this will spin up a Network
Load Balancer that will give you a single IP address that will
forward all external traffic to your service
• All traffic on the port you specify will be forwarded to the service
• There is no filtering, no routing, etc. This means you can send
almost any kind of traffic to it, like HTTP, TCP, UDP or
WebSockets
• A few limitations with LoadBalancer:
 Every service exposed gets its own IP address
 It gets very expensive to have an external IP for each
service (application)
https://rancher.com/blog/2018/2018-06-08-load-balancing-user-apps-with-rancher/
Kubernetes
Services
Load Balancer
• On Google Cloud, AWS, or Azure, a service type of
LoadBalancer in the service manifest file will immediately
provision an Elastic/Cloud Load Balancer that assigns an external
IP (public IP) to your application
• But for on-prem or bare-metal k8s clusters, this
functionality is not available
• Using service type LoadBalancer on bare metal will not
assign any external IP and the service resource will remain in
Pending state forever
https://collabnix.com/3-node-kubernetes-cluster-on-bare-metal-system-in-5-minutes/
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Kubernetes
Services
Load Balancer
# loadbalancer-service.yml
apiVersion: v1
kind: Service
metadata:
  name: lb-service
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
[Diagram: a GCP Load Balancer with external IP 10.12.16.22 in front of the cluster]
Kubernetes
GCP LoadBalancer
[Diagram: www.flask-app.com resolves via a DNS server to the GCP Load Balancer in front of Google Kubernetes Engine (GKE)]
A few limitations with LoadBalancer:
• Every service exposed gets its own public IP address
• It gets very expensive to have a public IP for each service
Kubernetes
GCP LoadBalancer Cons
[Diagram: three applications (connected-city, connected-factory, connected-tools), each with its own service, its own GCP Load Balancer, its own external IP (10.12.16.22, 10.12.16.23, 10.12.16.24) and its own public DNS name]
Kubernetes
LoadBalancer
• The application can be reached using the
external IP assigned by the LoadBalancer
• The LoadBalancer will forward the traffic
to the available nodes in the cluster on
the nodePort assigned to the service
Kubernetes
GCP LoadBalancer Cons
• Every service exposed gets its own IP address
• It gets very expensive to have an external IP for each service (application)
Kubernetes
Cloud LoadBalancer: Cons
• Every service exposed gets its own IP address
• It gets very expensive to have an external IP for each
of the services (applications)
• We see two LoadBalancers, each having its own IP.
If we send a request to LoadBalancer 22.33.44.55,
it gets redirected to our internal service-nginx. If we
send the request to 77.66.55.44, it gets redirected
to our internal service-python.
• This works great! But IP addresses are rare and
LoadBalancer pricing depends on the cloud
provider. Now imagine we don't have just two but
many more internal services for which we would
like to create LoadBalancers; costs would scale up.
• Might there be another solution which allows us to
only use one LoadBalancer (with one IP) but still
reach both of our internal services directly?
Kubernetes
LoadBalancer Vs Ingress
• Public IPs aren't cheap
• A LB can only handle a limited number of IPs
• SSL termination happens at the LB
(Application Load Balancer)
• Ingress acts as an internal LoadBalancer
• Routes traffic based on the URL path
• All applications will need only one public IP
Kubernetes
https://metallb.universe.tf/
Services
MetalLB Load Balancer
• MetalLB is a load-balancer
implementation for bare-metal
Kubernetes clusters.
• It allows you to create Kubernetes
services of type "LoadBalancer" in bare-
metal/on-prem clusters that don't run
on cloud providers like AWS, GCP,
Azure and DigitalOcean.
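A hedged sketch of a MetalLB layer-2 address pool in the ConfigMap format used by older MetalLB releases (newer releases use IPAddressPool CRDs instead); the address range is an assumption and must be free IPs in your node subnet:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.240-192.168.0.250   # example range, adjust to your network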
Kubernetes
Ingress Resource(rules)
• With cloud LoadBalancers, we need to pay for each of the
services that is exposed using LoadBalancer as the service type.
As services grow in number, the complexity of managing SSL,
scaling, auth etc. also increases
• Ingress allows us to manage all of the above within the
Kubernetes cluster with a definition file that lives along with
the rest of your application deployment files
• An Ingress controller can perform load balancing, auth, SSL and
URL/path-based routing configurations by being inside the
cluster, living as a Deployment or a DaemonSet
• Ingress helps users access the application using a single
externally accessible URL that you can configure to route to
different services within your cluster based on the URL path, at
the same time terminating SSL/TLS
[Diagram: www.smartfactory.com and www.smartcity.com routed to service1 and service2]
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Kubernetes
Why SSL Termination at the LoadBalancer?
• SSL termination/offloading represents the end or termination point of an SSL connection
• SSL termination at the LoadBalancer decrypts and verifies data on the load balancer instead of
the application server. Unencrypted traffic is sent between the load balancer and the backend
servers
• It is desired because decryption is resource- and CPU-intensive
• Putting the decryption burden on the load balancer enables the server to spend processing
power on application tasks, which helps improve performance
• It also simplifies the management of SSL certificates
Kubernetes
Ingress Controller
• Ingress resources cannot do anything on their own. We need to have an Ingress controller in
order for the Ingress resources to work
• An Ingress controller implements the rules defined by Ingress resources
• Ingress controllers don't come with the standard Kubernetes binary; they have to be deployed
separately
• Kubernetes currently supports and maintains the GCE and nginx ingress controllers
• Other popular controllers include Traefik, HAProxy ingress, Istio, Ambassador etc.
• Ingress controllers are to be exposed outside the cluster using NodePort or with a cloud-native
LoadBalancer
• Ingress is most useful if you want to expose multiple services under the same IP address
• An Ingress controller can perform load balancing, auth, SSL and URL/path-based routing
configurations by being inside the cluster, living as a Deployment or a DaemonSet
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Kubernetes
[Diagram: requests for www.connected-city.com, www.connected-factory.com and www.connected-tools.com enter the cluster through a NodePort (30001) or cloud LB, reach the Ingress controller (options include GCE LB, NGINX, Traefik, Contour, Istio), which applies the Ingress rules and routes to the connected-city service/pods, the connected-factory service/pods, or a default service/pods serving custom 404 pages]
Kubernetes
Nginx Ingress Controller
• ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and
load balancer
• Officially maintained by the Kubernetes community
• Routes requests to services based on the request host or path, centralizing a number of
services into a single entrypoint.
Ex: www.mysite.com or www.mysite.com/stats
Deploy Nginx Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#bare-metal
Kubernetes
Ingress Rules
# ingress-rules.yml (path-based routing)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
spec:
  rules:
  - host:
    http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx-service
          servicePort: 80
      - path: /flask
        backend:
          serviceName: flask-service
          servicePort: 80

# ingress-rules.yml (host-based routing)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx-app.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: flask-app.com
    http:
      paths:
      - backend:
          serviceName: flask-service
          servicePort: 80

The Ingress controller executes these ingress rules by comparing them with the requested URL in the HTTP header.
Kubernetes
Demo: Ingress
• 3-VM K8s cluster + 1 VM for a reverse proxy
• Deploy the Ingress controller
• Deploy pods
• Deploy services
• Deploy Ingress rules
• Configure the external reverse proxy
• Update DNS names
• Access the applications using their URLs
• connected-city.com
• connected-factory.com
Kubernetes
Demo: Ingress
Architecture
[Architecture diagram: a DNS server points connected-city.com (and connected-factory.com) to the HAProxy server at 10.11.3.5, which forwards requests to the k8s cluster node IPs on the ingress controller NodePort]
HAProxy configuration (backend servers):
server 192.168.0.101:30001
server 192.168.0.102:30001
server 192.168.0.103:30001
Kubernetes
Demo: Ingress
1. Deploy the Nginx Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml
2. Deploy pods and services
Application-1: Deployment + ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: connectedcity-service
spec:
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: connectedcity
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connectedcity-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: connectedcity
  template:
    metadata:
      labels:
        app: connectedcity
    spec:
      containers:
Application-2: Deployment + ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: connectedfactory-service
spec:
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: connectedfactory
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connectedfactory-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: connectedfactory
  template:
    metadata:
      labels:
        app: connectedfactory
    spec:
      containers:
      - name: connectedfactory
kubectl apply -f <object>.yml
Kubernetes
Demo: Ingress
3. Deploy the ingress rules manifest file
• Host-based routing rules
• Connects to various services depending on the host parameter
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: connected-city.com
    http:
      paths:
      - backend:
          serviceName: connectedcity-service
          servicePort: 80
  - host: connected-factory.com
    http:
      paths:
      - backend:
          serviceName: connectedfactory-service
          servicePort: 80
kubectl apply -f <object>.yml
Kubernetes
Demo: Ingress
4. Deploy the HAProxy load balancer (on 10.11.3.5)
• Provision a VM
• Install HAProxy using the package manager
 apt install haproxy -y
• Restart the HAProxy service after modifying the configuration
 systemctl stop haproxy
 add the configuration to /etc/haproxy/haproxy.cfg
 systemctl start haproxy && systemctl enable haproxy
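A hedged sketch of the haproxy.cfg used in this demo, assuming the three node IPs and the ingress controller NodePort (30001) shown in the architecture slide; the exact file from the demo is not reproduced here:

frontend k8s-ingress
    bind *:80
    mode tcp
    default_backend k8s-nodes

backend k8s-nodes
    mode tcp
    balance roundrobin
    server worker1 192.168.0.101:30001 check
    server worker2 192.168.0.102:30001 check
    server worker3 192.168.0.103:30001 check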
Kubernetes
Demo: Ingress
5. Update dummy DNS entries
Both DNS names should point to the IP of the HAProxy server
Windows: C:\Windows\System32\drivers\etc\hosts
192.168.0.105 connected-city.com
192.168.0.105 connected-factory.com
ipconfig /flushdns
Linux: /etc/hosts
192.168.0.105 flask-app.com
Kubernetes
Demo: Ingress
6. Access the applications through their URLs: connected-city.com and connected-factory.com
Kubernetes
Ingress using Network LB
Kubernetes
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
https://devblogs.microsoft.com/premier-developer/bypassing-authentication-for-the-local-kubernetes-cluster-dashboard/
Dashboard
• Default login access to the dashboard is by using a token or a kubeconfig file. This can be bypassed for internal testing
but is not recommended in production
• Uses NodePort to expose the dashboard outside the Kubernetes cluster
• Change the service to ClusterIP and use it in conjunction with ingress resources to make it accessible through a
DNS name (similar to previous demos)
https://github.com/kunchalavikram1427/kubernetes/blob/master/dashboard/insecure-dashboard-nodeport.yaml
Kubernetes
Volumes
• By default, container data is stored inside its own file system
• Containers are ephemeral in nature. When they are destroyed, the data inside them gets deleted
• Also, when running multiple containers in a Pod it is often necessary to share files between those
containers
• In order to persist data beyond the lifecycle of a pod, Kubernetes provides volumes
• A volume can be thought of as a directory which is accessible to the containers in a pod
• The medium backing a volume and its contents are determined by the volume type
Types of Kubernetes Volumes
• There are different types of volumes you can use in a Kubernetes pod:
 Node-local storage (emptyDir and hostPath)
 Cloud volumes (e.g., awsElasticBlockStore, gcePersistentDisk, and azureDiskVolume)
 File-sharing volumes, such as Network File System (NFS)
 Distributed file systems (e.g., CephFS and GlusterFS)
 Special volume types such as PersistentVolumeClaim, secret, configmap and gitRepo
Kubernetes
emptyDir
• An emptyDir volume is first created when a Pod is assigned to a
node
• It is initially empty and has the same lifetime as the pod
• emptyDir volumes are stored on whatever medium is backing
the node - that might be disk, SSD, network storage or
RAM
• Containers in the Pod can all read and write the same files in
the emptyDir volume
• This volume can be mounted at the same or different paths in
each container
• When a Pod is removed from a node for any reason, the data
in the emptyDir is deleted forever
• Mainly used to store cache or temporary data to be processed
Kubernetes
emptyDir
# emptyDir-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
  labels:
    app: busybox
    purpose: emptydir-demo
spec:
  volumes:
  - name: cache-volume
    emptyDir: {}
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["date >> /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: container-2
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["cat /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume

kubectl apply -f emptyDir-demo.yml
kubectl exec -it pod/emptydir-pod -c container-2 -- cat /cache/date.txt
kubectl logs pod/emptydir-pod -c container-2
Kubernetes
hostPath
• This type of volume mounts a file or directory from the host
node's filesystem into your pod
• The hostPath directory refers to a directory created on the node where the
pod is running
• Use it with caution, because when pods are scheduled on
multiple nodes, each node gets its own hostPath storage
volume. These may not be in sync with each other and
different pods might be using different data
• Let's say the pod with the hostPath configuration is deployed on
worker node 2. Then host refers to worker node 2. So any
hostPath location mentioned in the manifest file refers to worker
node 2 only
• When a node becomes unstable, the pods might fail to access
the hostPath directory and eventually get terminated
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Kubernetes
# hostPath-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["ls /cache; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: hostpath-volume
hostPath
kubectl apply -f hostPath-demo.yml
kubectl logs pod/hostpath-pod -c container-1
kubectl exec -it pod/hostpath-pod -c container-1 -- ls /cache
Kubernetes
Persistent Volume and Persistent Volume Claim
• Managing storage is a distinct problem inside a cluster. You cannot rely on emptyDir or hostPath for
persistent data.
• Also, provisioning a cloud volume like EBS or AzureDisk often tends to be complex because of the complex
configuration options to be followed for each service provider
• To overcome this, the PersistentVolume subsystem provides an API for users and administrators that
abstracts details of how storage is provided from how it is consumed. To do this, K8s offers two API
resources: PersistentVolume and PersistentVolumeClaim.
Kubernetes
Persistent Volume and Persistent Volume Claim
Persistent Volume (PV)
• A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an
administrator or dynamically provisioned using Storage Classes (pre-defined provisioners and
parameters used to create a Persistent Volume)
• The admin creates a pool of PVs for the users to choose from
• It is a cluster-wide resource used to store/persist data beyond the lifetime of a pod
• A PV is not backed by locally-attached storage on a worker node but by a networked storage system such
as a cloud provider's storage, NFS or a distributed filesystem like Ceph or GlusterFS
• Persistent Volumes provide a file system that can be mounted to the cluster, without being associated
with any particular node
Kubernetes
https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
Persistent Volume and Persistent Volume Claim
Persistent Volume Claim (PVC)
• In order to use a PV, a user needs to first claim it using a PVC
• A PVC requests a PV with the desired specification (size, speed, etc.) from Kubernetes and then binds it to a
resource (pod, deployment, …) as a volume mount
• The user doesn't need to know the underlying provisioning. The claims must be created in the same namespace
where the pod is created.
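As a rough sketch of the PV/PVC flow (the NFS server address, sizes and names are illustrative assumptions, not from the slides): an admin-provisioned PV, a PVC that claims it, and a pod that mounts the claim:

# pv-pvc-demo.yml (hypothetical example)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:                          # assumed NFS backend; any supported type works
    server: 192.168.0.50
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo-pod
spec:
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: nfs-pvc        # bind the claim as a pod volume
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data-volume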
Kubernetes
Persistent Volume and Persistent Volume Claim
https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
Kubernetes
Using PVCs
Deployments
Kubernetes Cluster
Kubernetes
Using PVCs in GKE
https://medium.com/google-cloud/introduction-to-docker-and-kubernetes-on-gcp-with-hands-on-configuration-part-3-kubernetes-with-eb41f5fc18ae
Kubernetes
Logs
kubectl logs my-pod                               # dump pod logs (stdout)
kubectl logs -l name=myLabel                      # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container               # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container      # dump pod container logs, with label name=myLabel (stdout)
kubectl logs -f my-pod                            # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container            # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers  # stream all pods' logs with label name=myLabel (stdout)
kubectl logs my-pod -f --tail=1                   # stream last line of pod logs
kubectl logs deploy/<deployment-name>             # dump deployment logs
Kubernetes
Interaction with pods
kubectl run -i --tty busybox --image=busybox -- sh             # run pod as interactive shell
kubectl run nginx -it --image=nginx -- bash                    # run pod nginx interactively
kubectl run nginx --image=nginx --dry-run -o yaml > pod.yaml   # run pod nginx and write its spec into a file called pod.yaml
kubectl attach my-pod -i                                       # attach to running container
kubectl exec my-pod -- ls /                                    # run command in existing pod (1 container case)
kubectl exec my-pod -c my-container -- ls /                    # run command in existing pod (multi-container case)
Scheduling
Kubernetes
Scheduling
• Kubernetes users normally don't need to choose a node to which their Pods should be scheduled
• Instead, the selection of the appropriate node(s) is automatically handled by the Kubernetes scheduler
• Automatic node selection prevents users from selecting unhealthy nodes or nodes with a shortage of
resources
• However, sometimes manual scheduling is needed to ensure that certain pods are only scheduled on nodes
with specialized hardware like SSD storage, or to co-locate services that communicate
frequently (availability zones), or to dedicate a set of nodes to a particular set of users
• Kubernetes offers several ways to manually schedule pods. In all cases, the recommended
approach is to use label selectors to make the selection
• Manual scheduling options include:
1. nodeName
2. nodeSelector
3. Node affinity
4. Taints and Tolerations
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
• nodeName
• nodeName is a field of PodSpec
• nodeName is the simplest form of node selection constraint, but due to its limitations it is typically not
used
• When the scheduler finds no nodeName property, it automatically adds one and assigns the pod to an
available node
• Manually assign a pod to a node by writing the nodeName property with the desired node name.
• We can also schedule pods on the Master by this method
• Some of the limitations of using nodeName to select nodes are:
 If the named node does not exist, the pod will not be run, and in some cases may be
automatically deleted
 If the named node does not have the resources to accommodate the pod, the pod will fail and its
reason will indicate why, for example OutOfmemory or OutOfcpu
 Node names in cloud environments are not always predictable or stable
Kubernetes
Scheduling
Kubernetes
Scheduling
node Name
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  nodeName: k8s-master
nodeName.yml
kubectl apply -f nodeName.yml
Kubernetes
Scheduling
node Selector
• nodeSelector is a field of PodSpec
• It is the simplest recommended form of node selection constraint
• It uses labels (key-value pairs) to select matching nodes onto which pods can be scheduled
• The disadvantage of nodeSelector is that it is a hard requirement, i.e., if matching nodes are not available, the pods remain in the Pending state!
check default node labels
kubectl describe node <node-name>
Kubernetes
Scheduling
node Selector
Add labels to nodes
kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes k8s-slave01 environment=dev
delete a label: kubectl label node <node-name> <label-key>-
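Before relying on a label in a nodeSelector, it can be confirmed with the standard listing commands (shown with the example label added above):
kubectl get nodes --show-labels               # list all nodes together with their labels
kubectl get nodes -l environment=dev          # list only the nodes carrying environment=dev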
Kubernetes
Scheduling
node Selector
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    environment: dev
nodeSelector.yml
kubectl apply -f nodeSelector.yml
kubectl get pods -o wide --show-labels
kubectl describe pod <pod-name>
Kubernetes
Scheduling
node Affinity
Node affinity is specified as the field nodeAffinity in PodSpec
• Node affinity is conceptually similar to nodeSelector: it allows you to manually schedule pods based on labels on the node. But it has a few key enhancements:
• The nodeAffinity implementation is more expressive. The language offers more matching rules besides the exact matches created with a logical AND operation in nodeSelector
• Rules can be soft preferences rather than hard requirements, so if the scheduler can’t find a node with matching labels, the pod can still be scheduled on other nodes
There are currently two types of node affinity rules:
1. requiredDuringSchedulingIgnoredDuringExecution: Hard requirement like nodeSelector. No matching node label, no pod scheduling!
2. preferredDuringSchedulingIgnoredDuringExecution: Soft requirement. No matching node label, the pod gets scheduled on other nodes!
The IgnoredDuringExecution part indicates that if labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod will still continue to run on the node.
Kubernetes
Scheduling
node Affinity
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: environment
            operator: In
            values:
            - prod
  containers:
  - name: nginx-container
    image: nginx
nodeAffinity.yml
kubectl apply -f nodeAffinity.yml
• The pod gets scheduled on a node that has the label environment=prod
• If none of the nodes has this label, the pod remains in the Pending state.
• To avoid this, use the affinity type preferredDuringSchedulingIgnoredDuringExecution (see the sketch below)
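For reference, a minimal sketch of the soft (preferred) variant mentioned above; the weight and label values are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: with-preferred-affinity
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                     # 1-100; higher weights are favoured more strongly
        preference:
          matchExpressions:
          - key: environment
            operator: In
            values:
            - prod
  containers:
  - name: nginx-container
    image: nginx
If no node carries environment=prod, the pod is still scheduled onto some other node instead of staying Pending.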
Kubernetes
Scheduling
Taints and Tolerations
• Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement)
• Taints are the opposite: they allow a node to repel a set of pods.
• Taints are applied to nodes (locks)
• Tolerations are applied to pods (keys)
• In short, a pod should tolerate a node’s taint in order to run on it. It’s like the pod holding the correct key to unlock the node and enter it
• Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes
Pod
K8s Node
Kubernetes
Scheduling
Taints and Tolerations
• By default, the Master node is tainted. So you cannot deploy any pods on the Master
• To check the taints applied on any node, use kubectl describe node <node-name>
Apply taint to nodes
kubectl taint nodes <node-name> key=value:<taint-effect>
• the taint’s key and value can be any arbitrary string
• the taint effect should be one of the supported taint effects:
1. NoSchedule: no pod will be able to schedule onto the node unless it has a matching toleration.
2. PreferNoSchedule: soft version of NoSchedule. The system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required
3. NoExecute: the node controller will immediately evict all Pods without a matching toleration from the node, and new pods will not be scheduled onto the node
Kubernetes
Scheduling
Taints and Tolerations
Apply taint to nodes
kubectl taint nodes k8s-slave01 env=stag:NoSchedule
• In the above case, node k8s-slave01 is tainted with the key/value env=stag and the taint effect NoSchedule. Only pods with a matching toleration can be scheduled onto this node
Check taints on nodes
kubectl describe node k8s-slave01 | grep -i taint
Kubernetes
Scheduling
Taints and Tolerations
Apply tolerations to pods
kubectl apply -f taint_toleration.yml
kubectl get pods -o wide
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "stag"
        effect: "NoSchedule"
taint_toleration.yml
• Here, pods are scheduled onto both slave nodes.
• Only slave01 is tainted here, and matching tolerations are added to the pods. So pods are scheduled onto slave01 as well.
• If we remove the tolerations from the pods and redeploy them, they will get scheduled onto slave02 only, as slave01 is tainted and a matching toleration is no longer present on the pods! (The cleanup commands below show how to remove the taint itself.)
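To undo this demo, the taint can be removed by re-running the taint command with a trailing minus (the node and key names below are the ones used in the demo):
kubectl taint nodes k8s-slave01 env=stag:NoSchedule-    # remove this specific taint from slave01
kubectl taint nodes k8s-slave01 env-                    # or remove every taint with the key "env"
kubectl describe node k8s-slave01 | grep -i taint       # verify the taint is gone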
References
• Docker Installation on Ubuntu
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
• K3s Installation
https://k33g.gitlab.io/articles/2020-02-21-K3S-01-CLUSTER.html
https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
• Kubernetes 101
https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/
• Kubeadm
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
• k3s
https://rancher.com/docs/
• Kubectl commands
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/reference/kubectl/overview/
• Deployments
https://www.bmc.com/blogs/kubernetes-deployment/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
• Services
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other
https://theithollow.com/2019/02/05/kubernetes-service-publishing/
https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/
https://medium.com/@JockDaRock/metalloadbalancer-kubernetes-on-prem-baremetal-loadbalancing-101455c3ed48
https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e
• Ingress https://www.youtube.com/watch?v=QUfn0EDMmtY&list=PLVSHGLlFuAh89j0mcWZnVhfYgvMmGI0lF&index=18&t=0s
• K8s Dashboard
https://github.com/kubernetes/dashboard
https://github.com/indeedeng/k8dash
• YAML https://kubeyaml.com/
01. Kubernetes-PPT.pptx

More Related Content

01. Kubernetes-PPT.pptx

  • 2. What is Kubernetes? • Kubernetes is an orchestration engine and open-source platform for managing containerized applications. • Responsibilities include container deployment, scaling & descaling of containers & container load balancing. • Actually, Kubernetes is not a replacement for Docker, But Kubernetes can be considered as a replacement for Docker Swarm, Kubernetes is significantly more complex than Swarm, and requires more work to deploy. • Born in Google ,written in Go/Golang. Donated to CNCF(Cloud native computing foundation) in 2014. • Kubernetes v1.0 was released on July 21, 2015. • Current stable release v1.18.0.
  • 3. Kubernetes Features The features of Kubernetes, are as follows: •Automated Scheduling: Kubernetes provides advanced scheduler to launch container on cluster nodes based on their resource requirements and other constraints, while not sacrificing availability. •Self Healing Capabilities: Kubernetes allows to replaces and reschedules containers when nodes die. It also kills containers that don’t respond to user-defined health check and doesn’t advertise them to clients until they are ready to serve. •Automated rollouts & rollback: Kubernetes rolls out changes to the application or its configuration while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, with Kubernetes you can rollback the change. •Horizontal Scaling & Load Balancing: Kubernetes can scale up and scale down the application as per the requirements with a simple command, using a UI, or automatically based on CPU usage.
  • 4. Kubernetes Features 5. Service Discovery & Load balancing With Kubernetes, there is no need to worry about networking and communication because Kubernetes will automatically assign IP addresses to containers and a single DNS name for a set of containers, that can load-balance traffic inside the cluster. Containers get their own IP so you can put a set of containers behind a single DNS name for load balancing. 6. Storage Orchestration With Kubernetes, you can mount the storage system of your choice. You can either opt for local storage, or choose a public cloud provider such as GCP or AWS, or perhaps use a shared network storage system such as NFS, iSCSI, etc.
  • 5. Container orchestration • Container orchestration automates the deployment,management, scaling, and networking of containers across the cluster . It is focused on managing the life cycle of containers. • Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration. • Container orchestration is used to automate the following tasks at scale:  Configuring and scheduling of containers  Provisioning and deployment of containers  Redundancy and availability of containers  Scaling up or removing containers tospread application load evenly across host infrastructure  Movement ofcontainers from one host to another if there is a shortage of resources in a host, or if a host dies  Allocation of resources between containers  External exposure of services running in a container with the outside world  Load balancing of service discovery between containers  Health monitoring of containers and hosts
  • 6. Swarm vs Kubernetes Features Kubernetes Docker Swarm Installation & Cluster Configuration Installation is complicated; but once setup, the cluster is very strong Installation is very simple; but cluster is not very strong GUI GUI is the Kubernetes Dashboard There is no GUI Scalability Highly scalable & scales fast Highly scalable & scales 5x faster than Kubernetes Auto-Scaling Kubernetes can do auto-scaling Docker Swarm cannot do auto- scaling Rolling Updates & Rollbacks Can deploy Rolling updates & does automatic Rollbacks Can deploy Rolling updates, but not automatic Rollbacks Data Volumes Can share storage volumes only with other containers in same Pod Can share storage volumes with any other container Logging & Monitoring In-built tools for logging & monitoring 3rd party tools like ELK should be used for logging & monitoring Both Kubernetes and Docker Swarm are important tools that are used to deploy containers inside a cluster but there are subtle differences between the both
  • 7. Kubernetes • Kubernetes also known as K8s, is an open-source Container Management tool • It provides a container runtime, container orchestration, container-centric infrastructure orchestration, self-healing mechanisms, service discovery, load balancing and container (de)scaling. • Initially developed by Google, for managing containerized applications in a clustered environment but later donated to CNCF • Written in Golang • It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.
  • 8. Kubernetes https://www.cncf.io/certification/software-conformance/ Certified Kubernetes Distributions • Cloud Managed:EKS byAWS, AKS by Microsoft and GKE by google • Self Managed: OpenShift by Redhat and Docker Enterprise • Local dev/test: Micro K8s by Canonical, Minikube • Vanilla Kubernetes: The core Kubernetes project(baremetal), Kubeadm • Special builds: K3s by Rancher , a light weight K8s distribution for Edge devices Online Emulator: https://labs.play-with-k8s.com/
  • 9. KubernetesCluster AKubernetes cluster is a set of physical or virtual machines and other infrastructure resources that are needed to run your containerized applications. Each machine in a Kubernetes cluster is called a node. There are two types of node in each Kubernetes cluster: Master node(s): hosts the Kubernetes control plane components and manages the cluster Worker node(s):runs your containerized applications worker worker Master
  • 10. KubernetesArchitecture ETCD Scheduler Controller Master API Server Kube-proxy Kubelet Docker Pods Containers Kubelet Kube-proxy Docker Pods Containers Worker 01 Worker 02 kubectl Web UI
  • 12. KubernetesArchitecture Kubernetes Master • Master is responsible for managing the complete cluster . • You can access master node via the CLI, GUI, orAPI • Themaster watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes • For achieving fault tolerance, there can be more than one master node in the cluster . • It is the access point from which administrators and other users interact with the cluster to manage the scheduling and deployment ofcontainers. • It has four components: ETCD,Scheduler ,Controller andAPI Server
  • 13. KubernetesArchitecture Kubernetes Master ETCD • ETCD is a distributed reliable key-value store used by Kubernetes to store all data used to manage the cluster . • When you have multiple nodes and multiple masters in your cluster , etcd stores all that information on all the nodes in the cluster in a distributed manner . • ETCD is responsible for implementing locks within the cluster to ensure thereareno conflicts between the Masters Scheduler • The scheduler is responsible for distributing work or containers across multiple nodes. • It looks for newly createdcontainers and assigns them to Nodes.
  • 14. KubernetesArchitecture Kubernetes Master API server manager • Masters communicate with the rest of the cluster through the kube-apiserver , the main access point to the control plane. • It validates and executes user’s REST commands • kube-apiserver also makes sure that configurations in etcd match with configurations of containers deployed in the cluster . Controller manager • The controllers arethe brain behind orchestration. • They are responsible for noticing and responding when nodes, containers or endpoints goes down. Thecontrollers makes decisions to bring up new containers in such cases. • The kube-controller-manager runs control loops that manage the state of the cluster by checking if the required deployments, replicas, and nodes are running in the cluster
  • 15. KubernetesArchitecture Kubernetes Master Kubectl • kubectl is the command line utility using which we can interact with k8s cluster • UsesAPIs providedbyAPI server to interact. • Also known as the kubecommand line tool or kubectl or kubecontrol. • Used to deploy and manage applications on a Kubernetes • kubectl run nginx used to deploy an application on the cluster . • kubectl cluster-info used to view information about the cluster and the • kubectl get nodes used to list all the nodes part of the cluster .
  • 16. KubernetesArchitecture Kubernetes Worker Kubelet • Worker nodes have the kubelet agent that is responsible for interacting with the master to provide health information of the worker node • T ocarry out actions requested by themaster on the worker nodes. Kubeproxy • The kube-proxy is responsible for ensuring network traffic is routed properly to internal and external services as required and is based on the rules definedby network policies in kube-controller-manager and other custom controllers.
  • 17. Kubernetes What is K3s? • K3s is a fully compliant Kubernetes distribution with the following enhancements:  Packagedas a single binary  <100MB memory footprint  SupportsARM and x86 architectures  Lightweight storage backend based on sqlite3 as the default storage mechanism to replace heavier ETCD server  Docker is replaced in favour ofcontainerd runtime  Inbuilt Ingress controller (Traefik)
  • 19. Kubernetes K3s Setup using VirtualBox • Use 3VMs(1 master and 2 workers).All VMs should have bridge network adapterenabled • Createa host only networkingadapter(DHCP disabled) and connectall VMs to it.This is to have static IPs for allVMs in thecluster .Make sure static IPs areconfigured in eachVM in thesame subnet range ofhost only network On Master • bash -c "curl -sfL https://get.k3s.io|sh -“ • TOKEN=cat /var/lib/rancher/k3s/server/node-token • IP = IP of masternode whereAPI server is running On Worker nodes • bash -c "curl -sfL https://get.k3s.io|K3S_URL="https://$IP:6443" K3S_TOKEN="$TOKEN" sh -" https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
  • 21. Kubernetes Pods Pods areoften ephemeral • Basic scheduling unit in Kubernetes. • Kubernetes doesn’t run containers directly; instead it wraps one or more containers into a higher-level structure calleda pod • It is also the smallest deployableunit that can be created, schedule, and managed on a Kubernetes cluster . Each pod is assigned a unique IP address within the cluster . • Pods can hold multiple containers as well, but you should limit yourself when possible. Because pods are scaled up and down as a unit, all containers in a pod must scale together , regardless of their individual needs. This leads to wasted resources. Ex:nginx,mysql, wordpress.. 10.244.0.22 containers
  • 22. Kubernetes 10.244.0.22 Pods • Any containers in the same pod will share the same storage volumes and network resources and communicate using localhost • K8s uses YAML to describe the desired state of the containers in a pod.This is also called a Pod Spec. These objects are passed to the kubelet through theAPI server . • Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a single pod instance can’t carry the load, Kubernetes can be configured to deploy new replicas of your pod to the cluster as necessary. containers Insidecluster Using theexamplefromtheabovefigure, you could run curl 10.1.0.1:3000 to communicatetotheonecontainerand curl10.1.0.1:5000 tocommunicatetothe othercontainerfromother pods.However, ifyou wanted totalkbetween containers - forexample, callingthetopcontainerfromthebottomone,you could usehttp://localhost:3000.
  • 23. Kubernetes Scaling Pods • All containers within the pod getscaled together . • You cannot scale individual containers within the pods. Thepod is the unit of scale in K8s. • Recommended way is to have only one container per pod. Multi container pods are very rare. • In K8s, initcontaineris sometimes used as a second container inside pod. initcontainers are exactlylike regular containers,exceptthat theyalways run to completion.Each initcontainermust completesuccessfullybeforethenextone starts. If a Pod’s init containerfails,KubernetesrepeatedlyrestartsthePod untiltheinitcontainersucceeds
  • 24. Kubernetes Imperative vs Declarative commands • KubernetesAPI defines a lot of objects/resources, such as namespaces, pods, deployments, services, secrets, config maps etc. • There are two basic ways to deploy objects in Kubernetes: Imperatively and Declaratively Imperatively • Involves using any of the verb-based commands like kubectl run, kubectl create,kubectl expose,kubectl delete,kubectl scale and kubectl edit • Suitable for testing and interactive experimentation Declaratively • Objects are written in YAML files and deployed using kubectl create or kubectl apply • Best suited for production environments
  • 25. Kubernetes Manifest /Spec file • K8s objectconfiguration files - Written in YAML or JSON • They describe the desired state of your application in terms of KubernetesAPI objects.Afile can include one or moreAPI object descriptions (manifests). # manifest filetemplate apiVersion - version of the KubernetesAPI used to create the object kind - kind of objectbeing created metadata - Data that helps uniquely identify the object,including a name and optional namespace spec - configuration that defines the desired for the object apiVersion: v1 kind:Pod metadata: name:… spec: containers: - name:… --- apiVersion: v1 kind:Pod metadata: name:… spec: containers: - name:… Multiple resource definitions
  • 26. Kubernetes Manifest files Man Pages apiVersion: v1 kind:Pod metadata: name:… spec: containers: - name:… --- apiVersion: v1 kind:Pod metadata: name:… spec: containers: - name:… Multiple resource definitions List all K8s API supported Objects and Versions kubectl api-resources kubectl api-versions Man pages for objects kubectl explain <object>.<option> kubectl explain pod kubectl explain pod.apiVersion kubectl explain pod.spec
  • 27. Kubernetes Once the cluster is setup… kubectl version kubectl get nodes – o wide
  • 28. Kubernetes Once the cluster is setup… kubectl cluster-info kubectlcluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state
  • 29. Kubernetes Creating Pods kubectl run <pod-name> --image <image-name> kubectl run nginx --image nginx --dry-run=client kubectl run nginx --image nginx --dry-run=client –o yaml https://kubernetes.io/docs/reference/kubectl/cheatsheet/ dry-run doesn’trun the commandbutwillshow what the changes the command woulddo tothe cluster shows thecommandoutputin YAML. Shortcuttocreatea declarative yamlfrom imperative commands
  • 30. Kubernetes Creating Pods: Imperative way kubectl run test --image nginx --port 80 - Also exposes port 80 of container kubectl getpods – o wide kubectl describe pod test – display extended information of pod container 10.244.2.2
  • 32. Kubernetes Creating Pods:Declarative way • kubectl create –f pod-definition.yml • kubectlapply –f pod-definition.yml – ifmanifest fileis changed/updated after deployment and need to re-deploy the pod again • kubectl deletepod <pod-name> # pod-definition.yml apiVersion:v1 kind:Pod metadata: name:nginx-pod labels: app:webapp spec: containers: - name: nginx-container image: nginx ports: - containerPort: 80
  • 34. Kubernetes Replication Controller • Asingle pod may not besufficient to handle the user traffic.Also if this only pod goes down because of a failure, K8s will not bring this pod up again automatically • In order to prevent this, we would like to have more than one instance or POD running at the same time inside the cluster • Kubernetes supports different controllers(Replicacontroller &ReplicaSet) to handle multiple instances of a pod. Ex: 3 replicas of nginx webserver • Replication Controller ensures high availability by replacing the unhealthy/deadpods with a new one to ensure required replicas are always running inside a cluster • So, does that mean you can’t use a replication controller if you plan to have a single POD? No! Even if you have a single POD, the replication controller can help by automatically bringing up a new POD when the existing one fails. • Another reason we need replication controller is to create multiple PODs to share the load across them. • Replica controller is deprecated and replaced by Replicaset
  • 35. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 ETCD API Server Scheduler Controller Kubelet When a command is given through kubectl Kubelet
  • 36. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 ETCD API Server Scheduler Controller Kubelet API Serverupdates the deployment details in ETCD Kubelet deployment
  • 37. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 ETCD API Server Scheduler Controller Kubelet Controller managerthrough API Server identifies its workload and createsa ReplicaSet Kubelet deployment replicaset
  • 38. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 API Server Scheduler Controller Kubelet ReplicaSet createsrequired numberof pods andupdates the ETCD. Note thestatus of pods.They are still in PENDINGstate Kubelet deployment replicaset pod PENDING ETCD pod PENDING pod PENDING
  • 39. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 API Server Scheduler Controller Kubelet Scheduler identifiesits workload through API-Server and decides the nodes onto which the podare to bescheduled. At this stage,pods areassigned to a node Kubelet deployment replicaset pod slave01 ETCD pod slave02 pod slave01
  • 40. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 API Server Scheduler Controller Kubelet Kubeletidentifiesits workload through API-Server and understands thatit needs to deploy somepods on its node Kubelet deployment replicaset pod slave01 ETCD pod slave02 pod slave01
  • 41. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 API Server Scheduler Controller Kubelet Kubeletinstructs the docker daemon to createthe pods.At the same timeit updates the status as ‘Pods CREATING’ in ETCD throughAPI Server Kubelet deployment replicaset pod CREATING ETCD pod CREATING pod CREATING Pod Pod Pod
  • 42. Kubernetes Behind the scene… K8s Master K8s slave 01 K8s slave 02 $ kubectl run nginx – image=nginx – replicas=3 API Server Scheduler Controller Kubelet Once pods are created and run, Kubelet updates the pod status as RUNNING in ETCD through API Server Kubelet deployment replicaset pod RUNNING ETCD pod RUNNING pod RUNNING Pod Pod Pod
  • 43. Kubernetes Labels and Selectors Labels • Labels are key/value pairs that are attached to objects, such as pods • Labels allows to logically group certain objects by giving various names to them • You can label pods, services, deployments and even nodes kubectl get pods -l environment=production kubectl getpods -l environment=production, tier=frontend https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  • 44. Kubernetes https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ Labels and Selectors • If labels are not mentioned while deploying k8s objects using imperative commands, the label is auto set as app: <object-name> kubectl run --imagenginx nginx kubectl get pods --show-labels Adding Labels kubectl label pod nginx environment=dev
  • 45. Kubernetes Labels and Selectors Selectors • Selectors allows to filter the objects based on labels • TheAPI currently supports two types of selectors: equality-based and set-based • Alabel selector can be made of multiple requirements which are comma-separated Equality-based Selector • Equality- or inequality-based requirements allow filtering by label keys and values. • Threekinds of operators are admitted =,==,!= Used by Replication Controllers and Services
  • 46. Kubernetes Labels and Selectors Selectors • Selectors allows to filter the objects based on labels • TheAPI currently supports two types of selectors: equality-based and set-based • Alabel selector can be made of multiple requirements which are comma-separated Set-based Selector • Set-based label requirements allow filtering keys according to a set ofvalues. • Threekinds of operators are supported: in,notin and exists (only the key identifier). kubectlget pods -l 'environment in (production, qa)' Used by ReplicaSets, Deployments, DaemonSets
  • 47. Kubernetes ReplicaSet • ReplicaSets are a higher-levelAPI that gives the ability to easily run multiple instances of a given pod • ReplicaSets ensures that the exact number of pods(replicas) are always running in the cluster by replacing any failed pods with new ones • Thereplica count is controlled by the replicas field in the resource definition file • Replicaset uses set-based selectors whereas replicacontroller uses equality based selectors
  • 48. Kubernetes Pod vs ReplicaSet apiVersion:apps/v1 kind:ReplicaSet metadata: name: nginx-replicaset labels: app:webapp type:front-end spec: replicas: 3 selector: matchLabels: app:webapp template: metadata: name: nginx-pod labels: app:webapp spec: containers: - name: nginx-container image:nginx ports: - containerPort:80 apiVersion:v1 kind:Pod metadata: name:nginx-pod labels: app: webapp spec: containers: - name:nginx-container image:nginx ports: - containerPort:80 pod.yml replicaset.yml
  • 50. Kubernetes apiVersion:apps/v1 kind:ReplicaSet metadata: name: nginx-replicaset labels: app:webapp type:front-end spec: replicas: 3 selector: matchLabels: app:webapp template: metadata: name: nginx-pod labels: app:webapp spec: containers: - name: nginx-container image:nginx ports: - containerPort:80 ReplicaSet kubectl create – freplica-set.yml kubectl get rs – o wide kubectl get pods – o wide
  • 51. Kubernetes ReplicaSet • kubectl edit replicaset <replicaset-name> - edit a replicaset; like image, replicas • kubectl deletereplicaset <replicaset-name> - deletea replicaset; like image, replicas • kubectl delete-freplica-set.yml • kubectl get all - get pods, replicasets,deployments, services all in one shot • kubectl replace -f replicaset-definition.yml -replaces the pods with updated definition file • kubectl scale -–replicas=6 –f replicaset-definition.yml – scale using definition file • kubectl scale -–replicas=6 replicaset <replicaset-name> - using name of replicaset
  • 53. Kubernetes Deployment • ADeployment provides declarative updates for Pods and ReplicaSets. • You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. • It seems similar to ReplicaSets but with advanced functions • Deployment is the recommended way to deploy a pod or RS • By default Kubernetes performs deployments in rolling update strategy. • Below aresome of the key features of deployment:  Easily deploy a RS  Rolling updates pods  Rollback to previous deployment versions  Scale deployment  Pause and resume deployment
  • 54. Kubernetes Deployment Strategy • Whenever wecreate a new deployment,K8s triggers a Rollout. • Rollout is the process of gradually deploying or upgrading your application containers. • For every rollout/upgrade, a version history will be created, which helps in rolling back to working version in case of an update failure • In Kubernetes there are a few different ways to release updates to an application • Recreate: terminate the old version and release the new one.Application experiences downtime. • RollingUpdate: release a new version on a rolling update fashion, one after the other . It’s the default strategy in K8s. No application downtime is required. • Blue/green: release a new version alongside the old version then switch traffic spec: replicas: 10 strategy: type:RollingUpdate rollingUpdate: maxSurge:2 maxUnavailable:0 spec: replicas: 10 strategy: type:Recreate
  • 55. Kubernetes Rolling Update Strategy • By default,deploymentensures that only 25% of your pods are unavailable during an update and does not updatemore that 25%of thepods at a given time • It does not kill old pods until/unless enough new pods comeup • It does not create new pods until a sufficientnumberof old pods are killed • Thereare two settings you can tweak to controltheprocess: maxUnavailableand maxSurge.Both havethe defaultvalues set - 25% • ThemaxUnavailablesetting specifies the maximum numberof pods that can beunavailableduring therollout process.You can set it to an actualnumber(integer)or a percentageof desired pods Let’ssay maxUnavailable is setto40%.When theupdatestarts,theoldReplicaSetis scaleddownto60%. As soon as new podsarestartedand ready ,theoldReplicaSetis scaleddownagainand thenewReplicaSetis scaledup.This happensinsuch a waythatthetotalnumber ofavailablepods(oldand new ,since wearescaling upand down) is alwaysatleast 60%. • ThemaxSurge setting specifies the maximum numberof pods that can becreatedoverthe desired numberof pods If we usethesame percentage as before(40%), thenew ReplicaSetis scaledup right awaywhen therolloutstarts.Thenew ReplicaSetwillbescaledupinsuch a waythatitdoesnot exceed140% ofdesiredpods.As oldpods getkilled,thenew ReplicaSet scales upagain,making sure itnevergoes over the140% ofdesiredpods
  • 56. Kubernetes Deployments • kubectl create deployment nginx --imagenginx --dry-run -o yaml • kubectl create -f deployment.yml --record (--record is optional, it just records theevents in thedeployment) • kubectl get deployments apiVersion:apps/v1 kind:Deployment metadata: name: nginx-deployment labels: app:nginx spec: replicas: 10 selector: matchLabels: app:nginx template: metadata: labels: app:nginx spec: containers: - name: nginx-container image:nginx ports: - containerPort:80
  • 57. Kubernetes Deployments • kubectl describedeployment <deployment-name> apiVersion:apps/v1 kind:Deployment metadata: name: nginx-deployment labels: app:nginx spec: replicas: 10 selector: matchLabels: app:nginx template: metadata: labels: app:nginx spec: containers: - name: nginx-container image:nginx ports: - containerPort:80
  • 58. Kubernetes Deployments • kubectl get pods – o wide • kubectl edit deployment <deployment -name> - perform live edit of deployment • kubectl scale deployment <deployment -name> --replicas2 • kubectl apply –f deployment.yml – redeploya modified yaml file;Ex: replicas changed to 5, imageto nginx:1.18 apiVersion:apps/v1 kind:Deployment metadata: name: nginx-deployment labels: app:nginx spec: replicas: 10 selector: matchLabels: app:nginx template: metadata: labels: app:nginx spec: containers: - name: nginx-container image:nginx ports: - containerPort:80
  • 59. Kubernetes Deployments • kubectl rollout status deployment <deployment -name> • kubectl rollout history deployment <deployment -name>
  • 60. Kubernetes Deployments • kubectl rollout undo deployment <deployment -name> • kubectl rollout undo deployment <deployment -name> --to-revision=1 • kubectl rollout pause deployment <deployment -name> • kubectl rollout resume deployment <deployment -name> • kubectl delete -f <deployment-yaml-file> - deletesdeploymentand relateddependencies • kubectl delete all --all – deletespods, replicasets, deployments and services in current namespace
  • 61. Kubernetes Namespaces Namespaces are Kubernetes objects which partition a single Kubernetes cluster into multiple virtual clusters • Kubernetes clusters can managelargenumbers of unrelated workloads concurrently and organizationsoften choose to deploy projects createdby separateteamsto shared clusters. • With multiple deployments in a single cluster , thereare high chancesof deleting deployments belong to deffprohjects. • So namespacesallow you to group objects togetherso you can filter andcontrolthemas a unit/group. • Namespacesprovide a scope for names. Names of resources need to beunique within a namespace, but not across namespaces. • So each Kubernetes namespaceprovides the scope for Kubernetes Names itcontains;which means that using the combinationof an objectnameand a Namespace,each objectgets a unique identity across the cluster
  • 62. Kubernetes Namespaces By default, a Kubernetes cluster is created with the following threenamespaces: • default: It’s a default namespace for users. By default, all the resource created in Kubernetes cluster are created in the default namespace • Kube-system: It is the Namespace for objects created by Kubernetes systems/control plane.Any changes to objects in this namespace would cause irreparable damageto the cluster itself • kube-public: Namespace for resources that are publicly readable by all users. This namespace is generally reserved for cluster usage like Configmaps and Secrets
  • 63. Kubernetes kubectl get all -n kube-system (lists available objects under a specific namespace) Namespaces kubectl getnamespaces kubectl get all --all-namespaces (lists available objects under all available namespaces)
  • 64. Kubernetes Namespaces Create a namespace kubectl create ns dev # Namespace for Developer team kubectl create ns qa # Namespace for QAteam kubectl create ns production # Namespace for Production team Deploy objects in a namespace kubectl run nginx --image=nginx -n dev kubectl get pod/nginx – n dev kubectl apply --namespace=qa -f pod.yaml Deletea namespace kubectl deletens production
  • 66. Kubernetes Services • Services logically connect pods across the cluster to enable networking between them • Thelifetime of an individual pod cannot be relied upon; everything from their IP addresses to their very existence are prone to change. • Kubernetes doesn’t treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it’s Kubernetes’jobto replace it so that the application doesn’t experience any downtime • Services makes sure that even after a pod(back-end) dies because of a failure, the newly created pods will be reached by its dependencypods(front-end) via services. In this case, front-end applications always find the backend applications via a simple service(using service name or IP address) irrespective of their location in the cluster • Services point to pods directly using labels. Services do not point to deployments or ReplicaSets. So, all pods with the same label gets attached to same service • 3 types: ClusterIP , NodePort and LoadBalancer
  • 67. Kubernetes Pods’lifecycle are erratic; they comeand go by Kubernetes’will. Not healthy? Killed. Not in theright place?Cloned, andkilled. So how can you send a request to your application if you can’t know for sure whereit lives? Theanswerlies in services. Services are tied to the pods using pod labels and provides a stable end point for the users to reachthe application. When requesting your application,you don’t care about its location or about which pod answers the request. Services
  • 68. • ClusterIP service is the default Kubernetes service. • It gives you a service inside your cluster that other apps inside your cluster can access • It restricts access to the application within the cluster itself and no external access • Useful when a front-end app wants to communicate with back-end • Each ClusterIP service gets a unique IP address inside the cluster • Similar to --links in Docker Kubernete s Services ClusterIP Services point to pods directly using labels!!! https://kubernetes.io/docs/concepts/services-networking/service/
  • 69. Kubernetes When services are not available • Imagine 2 pods on 2 separate nodes node-1 & node-2 with their local IP address • pod-nginx can ping and connect to pod-python using its internal IP 1.1.1.3.
  • 70. Kubernetes When services are not available • Now let’s imagine the pod-python dies and a new one is created. • Now pod-nginx cannot reach pod- python on 1.1.1.3 because its IP is changed to 1.1.1.5. How do we remove this dependency?
  • 71. Kubernetes Enter services… • Services logically connects pods together • Unlike pods, a serviceis not scheduled on a specific node. It spans across the cluster • Pod-nginx can always safely connect to pod-python using serviceIP 1.1.10.1 or the DNS name of service(service- python) • Even if the python pod gets deleted and recreated again, nginx pod can still reach python pod using the service but not with IP of python pod directly
  • 74. apiVersion:v1 kind:Service metadata: name:ingress-nginx spec: type:ClusterIP ports: - name:http port:80 targetPort: 80 protocol: TCP selector: app:nginx-backend apiVersion:v1 kind:Pod metadata: name:backend-pod labels: app: nginx-backend spec: containers: - name:nginx-container image:nginx ports: - containerPort:80 Nginx Alpine 10.244.0.22 10.244.0.24 Pod Pod ingress-nginx (ClusterIP) 10.20.0.18 80 clusterservice.yml pod.yml 80 targetPort port Kubernetes Services ClusterIP
  • 75. apiVersion:v1 kind:Service metadata: name:ingress-nginx spec: type:ClusterIP ports: - name:http port:80 targetPort: 80 protocol: TCP selector: app:nginx-backend apiVersion:v1 kind:Pod metadata: name:backend-pod labels: app: nginx-backend spec: containers: - name:nginx-container image:nginx ports: - containerPort:80 clusterip-service.yml pod.yml kubectl create – f clusterservice.yml kubectl create – f pod.yml root@alpine:# curl ingress-nginx check the endpoints: kubectl describe svc/<svc-name> Kubernetes Services ClusterIP
  • 76. Kubernetes Services NodePort • NodePort opens a specific port on all the Nodes in the cluster and forwards any traffic that is received on this port to internal services • Useful when front end pods areto beexposedoutside the cluster for users to access it • NodePort is build on top of ClusterIP service by exposing the ClusterIP service outside of the cluster • NodePort must be within the port range30000- 32767 • If you don’t specify this port, a random port will be assigned. It is recommended to let k8s auto assign this port
  • 77. Kubernetes spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort:30080 80 NodePort
  • 81. Kubernetes NodePort • Application can be reached from any of the available nodes in the cluster using <node-ip>:<node-port>
  • 82. Kubernetes NodePort apiVersion:v1 kind:Service metadata: name:nodeport-service spec: type:NodePort ports: - port:80 targetPort:80 nodePort:30001 protocol:TCP selector: app:nginx-frontend apiVersion:v1 kind:Pod metadata: name:nginx-frontend labels: app:nginx-frontend spec: containers: - name:nginx-container image:nginx ports: - containerPort:80 nodeport-service.yml pod.yml Nginx 10.244.1.66 Node 03 ingress-nginx (NodePort) 10.105.32.217 80 80 targetPort port 192.168.0.2 30001 NodePort Master/Worker kubectlcreate –f nodeportservice.yml kubectl create– fpod.yml
  • 83. Kubernetes Demo: NodePort kubectl create – f nodeport- service.yml kubectl create – f pod.yml kubectl get services
  • 85. Kubernetes Demo:NodePort kubectl get nodes – o wide 192.168.0.107:30001 192.168.0.107:30001
  • 86. Kubernetes NodePort Limitations • In NodePort service, users can access application using the URL http://<node-ip>:<node-port> • In Production environment, we do not want the users to have to type in the IP address every timeto access the application • So we configure a DNS server to point to the IP of the nodes.Users can now access the application using the URL http://xyz.com:30001 • Now, we don’t want the user to have to remember port number either . • However , NodePort service can only allocate high numbered ports which are greater than 30,000. • So we deploy a proxy server between the DNS server and the cluster thatproxies requests on port 80 to port 30001 on the nodes. • We then point the DNS to proxy server’s IP , and users can now access the application by simply visiting http://xyz.com spec: type:NodePort ports: - port:80 targetPort:5000 nodePort:30001 protocol:TCP NodePort30001 is being used onlyfor demo.Youcan configurethis port numberin servicemanifestfileor letK8s autoassignfor you.
  • 87. Kubernetes Services Load Balancer • ALoadBalancer service is the standard way to exposea Kubernetes service to the internet • On GKE(Google Kubernetes Engine), this will spin up a Network Load Balancer that will give you a single IP address that will forward all external traffic to your service • All traffic on the port you specify will be forwarded to the service • There is no filtering, no routing, etc.This means you can send almost any kind of traffic to it, like HTTP , TCP , UDP or WebSocket's • Few limitations with LoadBalancer:  Every service exposed will gets it's own IP address  It gets very expensive to have external IP for each of the service(application) https://rancher.com/blog/2018/2018-06-08-load-balancing-user-apps-with-rancher/
  • 88. Kubernetes Services Load Balancer • On GoogleCloud,AWS, orAzure, a service type of LoadBalancer in the service manifest file will immediately run an Elastic /CloudLoad Balancer that assigns externally IP (public IP) to your application • But for on-prem or bare-metal k8s clusters,this functionality is not available • Using service type as LoadBalancer on bare-metal will not assign any external IP and service resource will remain in Pending state forever https://collabnix.com/3-node-kubernetes-cluster-on-bare-metal-system-in-5-minutes/ spec: type:LoadBalancer selector: app:hello ports: - port:80 targetPort:8080 protocol:TCP
  • 90. External IP 10.12.16.22 GCP Load Balancer Kubernetes GCP LoadBalancer www.flask-app.com DNS Server GoogleKubernetes Engine (GKE) Few limitations with LoadBalancer • Every service exposed will gets it's own public IP address • It gets very expensive to have public IP for each of the service www.flask-app.com
  • 91. Kubernetes GCP LoadBalancer Cons Kubernetes Cluster connected-city service connected-factoryservice connected-tools service connected-city pods connected-factory pods connected-tools pods www.connected-city.com www.connected-factory.com www.connected-tools.com External IP 10.12.16.22 GCP Load Balancer External IP 10.12.16.23 External IP 10.12.16.24 Public IP =
  • 92. Kubernetes LoadBalancer • Application can bereached using the external IP assigned by the LoadBalancer • TheLoadBalancer will forward the traffic to the available nodes in the cluster on the nodePort assigned to the service
  • 93. Kubernetes GCP LoadBalancer Cons Every serviceexposed will gets it's own IP address It gets very expensive to have externalIP for each of theservice(application)
  • 94. Kubernetes Cloud LoadBalancer: Cons • Every service exposedwillgets it's own IP address • It gets very expensiveto have external IP for each of the service(application) • Weseetwo LoadBalancers, each having its own IP . If we send a request to LoadBalancer 22.33.44.55 it gets redirectedto our internal service-nginx. If we send the request to 77.66.55.44 it gets redirected to our internal service-python. • This works great! But IP addresses arerareand LoadBalancer pricing depends on the cloud providers. Now imaginewe don’t have just two but many more internal services for which we would like to create LoadBalancers, costs would scale up. • Might there beanother solution which allows us to only use one LoadBalancer (with one IP) but still reach both of our internal services directly?.
  • 95. Kubernetes LoadBalancer Vs Ingress • PublicIPs aren’t cheap • ALB can only handlelimitedIPs • So SSL termination (Application Load Balancer) • Ingress acts as internal LoadBalancer • Routes traffic basedon URLpath • All applications will need only one publicIP
  • 96. Kubernetes https://metallb.universe.tf/ Services MetalLB Load Balancer • MetalLB is a load-balancer implementation for bare metal Kubernetes clusters. • It allows you to create Kubernetes services of type “LoadBalancer” in bare- metal/on-prem clusters that don’t run on cloud providers likeAWS, GCP , Azureand DigitalOcean.
  • 97. Kubernetes Ingress Resource(rules) • With cloud LoadBalancers, we need to pay for each of the service that is exposed using LoadBalancer as the service type. As services grow in number , complexity to manage SSLs, Scaling,Auth etc., also increase • Ingress allows us to manage all of the above within the Kubernetes cluster with a definition file, that lives along with the rest of your application deployment files • Ingress controller can perform load balancing,Auth, SSL and URL/Path based routing configurations by being inside the cluster living as a Deployment or a DaemonSet • Ingress helps users access the application using a single externally accessible URL,that you can configure to route to different services within your cluster based on the URL path, at the same timeterminate SSL/TLS www.smartfactory.com www.smartcity .com service1 service2 https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
  • 98. Kubernetes Why SSL Termination atLoadBalancer? • SSL termination/offloading represents the end or termination point of an SSL connection • SSL termination at LoadBalancer decrypts and verifies data on the load balancer instead of the application server . Unencrypted traffic is sent between the load balancer and the backend servers • It is desired because decryption is resource and CPU intensive • Putting the decryption burden on the load balancer enables the server to spend processing power on application tasks, which helps improve performance • It also simplifies the management of SSL certificates
  • 99. Kubernetes Ingress Controller • Ingress resources cannot do anything on their own. We need to have an Ingress controller in order for the Ingress resources to work • Ingress controller implements rules defined by ingress resources • Ingress controllers doesn’t come with standard Kubernetes binary, they have to bedeployed separately • Kubernetes currently supports and maintains GCE and nginx ingress controllers • Other popular controllers include T raefik, HAProxy ingress, istio,Ambassador etc., • Ingress controllers are to be exposed outside the cluster using NodePort or with a Cloud Native LoadBalancer. • Ingress is the most useful if you want to expose multiple services under the same IP address • Ingress controller can perform load balancing,Auth, SSL and URL/Path based routing configurations by being inside the cluster living as a Deployment or a DaemonSet https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
  • 100. Kubernetes 30001 connected-city .com connected-factory .com others GCE LB NGINX TRAEFIK CONTOUR ISTIO Ingress Controller Kubernetes Cluster Ingress NodePort/ Cloud LB Ingress Controller connected-city service connected-factoryservice default service connected-city pods connected-factory pods default pods www.connected-city.com www.connected-factory.com www.connected-tools.com custom404 pages Ingress Rules
  • 101. Kubernetes Nginx Ingress Controller • Ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer • Officially maintained by Kubernetes community • Routes requests to services based on the request host or path, centralizing a number of services into a single entrypoint. Ex:www.mysite.com or www.mysite.com/stats kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller- 0.32.0/deploy/static/provider/baremetal/deploy.yaml https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#bare-metal Deploy Nginx Ingress Controller www.smartfactory.com www.smartcity .com service1 service2
  • 102. Kubernetes Ingress Rules apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name:ingress-rules spec: rules: - host: http: paths: -path: /nginx backend: serviceName: nginx-service servicePort: 80 -path: /flask backend: serviceName:flask-service servicePort: 80 apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name:ingress-rules annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: -host:nginx-app.com http: paths: - backend: serviceName:nginx-service servicePort: 80 -host:flask-app.com http: paths: - backend: serviceName:flask-service servicePort: 80 Path based routing Host based routing ingress-rules.yml Ingress-controller executestheseingress-rulesby comparing with thehttp requestedURL in thehttp header
  • 103. Kubernetes Demo: Ingress connected-factory .com connected-city .com • 3VMs K8s Cluster + 1 VM for Reverse Proxy • Deploy Ingress controller • Deploy pods • Deploy services • Deploy Ingress rules • Configure externalreverse proxy • Update DNS names • Access applications using URLs • connected-city.com • connected-factory.com
  • 104. Kubernetes Demo: Ingress Architecture DNS Server 10.11.3.5 HAProxy server192.168.0.101:30001 server192.168.0.102:30001 server192.168.0.103:30001 k8s cluster node IPs with ingress controller port HAProxy Configuration Kubernetes Cluster connected-city .com
  • 105. Kubernetes Demo: Ingress 1. Deploy Nginx Ingress Controller kubectlapply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller- 0.32.0/deploy/static/provider/baremetal/deploy .yaml 2. Deploy pods andservices apiVersion: v1 kind: Service metadata: name: connectedcity- service spec: ports: - port: 80 targetPort: 5000 selector: app: connectedcity apiVersion: apps/v1 kind: Deployment metadata: name: connectedcity- deployment spec: replicas: 3 selector: matchLabels: app: connectedcity template: metadata: labels: app: connectedcity spec: containers: Application-1 Deployment + ClusterIP service apiVersion: v1 kind: Service metadata: name: connectedfactory- service spec: ports: - port: 80 targetPort: 5000 selector: app: connectedfactory apiVersion: apps/v1 kind: Deployment metadata: name: connectedfactory- deployment spec: replicas: 3 selector: matchLabels: app: connectedfactory template: metadata: labels: app: connectedfactory spec: containers: - name: connectedfactory Application-2 Deployment + ClusterIP service kubectl apply – f<object>.yml
  • 106. Kubernetes apiVersion:networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-rules annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: -host: connected-city .com http: paths: - backend: serviceName:connectedcity-service servicePort:80 -host: connected-factory .com http: paths: - backend: serviceName:connectedfactory-service servicePort:80 Demo: Ingress 3. Deploy ingress rules manifestfile • Host basedrouting rules • Connectsto various services depending upon thehost parameter kubectl apply – f<object>.yml
  • 107. Kubernetes Demo: Ingress 4. Deploy HAProxyLoadBalancer • Provision a VM • Install HAProxy using package manager  apt install haproxy – y • RestartHAProxy serviceafter modifying theconfiguration  systemctl stop haproxy  add configuration to /etc/haproxy/haproxy .cfg  systemctl starthaproxy && systemctl enablehaproxy /etc/haproxy/haproxy .cfg 10.11.3.5
  • 108. Kubernetes C:WindowsSystem32driversetchosts 192.168.0.105 connected-city .com 192.168.0.105 connected-factory .com ipconfig /flushdns windows /etc/hosts 192.168.0.105 flask-app.com linux Demo: Ingress 5. Updatedummy DNS entries Both DNS namesto point to IP of HAProxy server connected-city.com connected-factory.com 10.11.3.5 HAProxy
  • 111. Kubernetes https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ https://devblogs.microsoft.com/premier-developer/bypassing-authentication-for-the-local-kubernetes-cluster-dashboard/ Dashboard • Defaultlogin access to dashboard is by using token or kubeconfig file.This can bebypassed for internal testing but not recommendedin production • Uses NodePort to exposethedashboardoutside theKubernetes cluster • Change the serviceto ClusterIPand use it in conjunction with ingress resources to make it accessiblethrough a DNS name(similarto previous demos) https://github.com/kunchalavikram1427/kubernet es/blob/master/dashboard/insecure-dashboard- nodeport.yaml
  • 112. Kubernetes Volumes • By default, container data is stored inside own its file system • Containers are ephemeral in nature. When they are destroyed, the data inside them gets deleted • Also when running multiple containers in a Pod it is often necessary to share files between those Containers • In order to persist data beyond the lifecycle of pod, Kubernetes provide volumes • Avolume can be thought of as a directory which is accessible to the containers in a pod • The medium backing a volume and its contents are determined by the volume type T ypes of Kubernetes Volumes • There are different types of volumes you can use in a Kubernetes pod:  Node-local memory (emptyDir and hostPath)  Cloud volumes (e.g., awsElasticBlockStore, gcePersistentDisk, and azureDiskVolume)  File-sharing volumes, such as Network File System (NFS)  Distributed-file systems (e.g., CephFS and GlusterFS)  Special volume types such as PersistentVolumeClaim, secret, configmap and gitRepo 10.244.0.22 Pod containers
  • 113. Kubernetes emptyDir • emptyDir volume is first createdwhen a Pod is assigned to a Node • It is initially empty and has same lifetimeof a pod • emptyDir volumes arestored on whatever medium is backing the node - that might bedisk or SSD or network storage or RAM • Containers in the Pod can all read and write the same files in the emptyDir volume • This volume can bemounted at the same or different paths in each Container • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever • Mainly used to store cache or temporary data to beprocessed 10.244.0.22 container1 container2 emptyDir
• 114. Kubernetes
emptyDir
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
  labels:
    app: busybox
    purpose: emptydir-demo
spec:
  volumes:
  - name: cache-volume
    emptyDir: {}
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["date >> /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: container-2
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["cat /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume

kubectl apply -f emptyDir-demo.yml
kubectl exec -it pod/emptydir-pod -c container-2 -- cat /cache/date.txt
kubectl logs pod/emptydir-pod -c container-2
• 115. Kubernetes
hostPath
• This type of volume mounts a file or directory from the host node's filesystem into your pod
• The hostPath directory refers to a directory created on the Node where the pod is running
• Use it with caution: when pods are scheduled on multiple nodes, each node gets its own hostPath storage volume. These may not be in sync with each other, and different pods might be using different data
• Let's say the pod with the hostPath configuration is deployed on worker node 2. Then the host refers to worker node 2, so any hostPath location mentioned in the manifest file refers to worker node 2 only
• When the node becomes unstable, the pods might fail to access the hostPath directory and eventually get terminated
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
• 116. Kubernetes
hostPath
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["ls /cache; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: hostpath-volume

kubectl apply -f hostPath-demo.yml
kubectl logs pod/hostpath-pod -c container-1
kubectl exec -it pod/hostpath-pod -c container-1 -- ls /cache
• 117. Kubernetes
Persistent Volume and Persistent Volume Claim
• Managing storage is a distinct problem inside a cluster. You cannot rely on emptyDir or hostPath for persistent data
• Also, providing a cloud volume like EBS or AzureDisk often tends to be complex because of the configuration options that must be followed for each service provider
• To overcome this, the PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, K8s offers two API resources: PersistentVolume and PersistentVolumeClaim
• 118. Kubernetes
Persistent Volume and Persistent Volume Claim
Persistent Volume (PV)
• A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes (pre-defined provisioners and parameters to create a Persistent Volume)
• The admin creates a pool of PVs for the users to choose from
• It is a cluster-wide resource used to store/persist data beyond the lifetime of a pod
• A PV is typically not backed by locally-attached storage on a worker node but by a networked storage system such as a cloud provider's storage, NFS, or a distributed filesystem like Ceph or GlusterFS
• Persistent Volumes provide a file system that can be mounted to the cluster, without being associated with any particular node
• 119. Kubernetes
Persistent Volume and Persistent Volume Claim
Persistent Volume Claim (PVC)
• In order to use a PV, a user needs to first claim it using a PVC
• A PVC requests a PV with the desired specification (size, speed, etc.) from Kubernetes and then binds it to a resource (pod, deployment, ...) as a volume mount
• The user doesn't need to know the underlying provisioning. The claims must be created in the same namespace where the pod is created; a minimal PV/PVC sketch follows below
https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
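A minimal sketch of the two objects plus a pod consuming the claim, assuming an NFS server is available; the server address, export path, object names and sizes here are illustrative, not from the slides:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-demo
  spec:
    capacity:
      storage: 1Gi
    accessModes:
    - ReadWriteMany
    nfs:
      server: 10.11.3.10      # assumed NFS server
      path: /exports/data     # assumed export path
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-demo
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pvc-demo-pod
  spec:
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-demo
    containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh","-c"]
      args: ["ls /data; sleep 1000"]
      volumeMounts:
      - mountPath: /data
        name: data

  # watch the claim bind to the volume
  kubectl get pv,pvc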
  • 120. Kubernetes Persistent Volume and Persistent Volume Claim https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
  • 122. Kubernetes Using PVCs in GKE https://medium.com/google-cloud/introduction-to-docker-and-kubernetes-on-gcp-with-hands-on-configuration-part-3-kubernetes-with-eb41f5fc18ae
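On GKE, a PVC alone is usually enough because the default StorageClass dynamically provisions a persistent disk. A hedged sketch; the StorageClass name "standard" is the usual GKE default but may differ per cluster, and the claim name and size are illustrative:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gke-pvc-demo
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: standard   # assumed default StorageClass on GKE
    resources:
      requests:
        storage: 10Gi

  # kubectl apply -f gke-pvc-demo.yml
  # kubectl get pvc,pv   # a PV is created and bound automatically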
• 123. Kubernetes
Logs
kubectl logs my-pod                                # dump pod logs (stdout)
kubectl logs -l name=myLabel                       # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container                # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container       # dump pod logs, with label name=myLabel (stdout)
kubectl logs -f my-pod                             # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container             # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers   # stream all pods logs with label name=myLabel (stdout)
kubectl logs my-pod -f --tail=1                    # stream last line of pod logs
kubectl logs deploy/<deployment-name>              # dump deployment logs
• 124. Kubernetes
Interaction with pods
kubectl run -i --tty busybox --image=busybox -- sh             # Run pod as interactive shell
kubectl run nginx -it --image=nginx -- bash                    # Run pod nginx
kubectl run nginx --image=nginx --dry-run -o yaml > pod.yaml   # Run pod nginx and write its spec into a file called pod.yaml
kubectl attach my-pod -i                                       # Attach to running container
kubectl exec my-pod -- ls /                                    # Run command in existing pod (1 container case)
kubectl exec my-pod -c my-container -- ls /                    # Run command in existing pod (multi-container case)
• 126. Kubernetes
Scheduling
• Kubernetes users normally don't need to choose a node to which their Pods should be scheduled
• Instead, the selection of the appropriate node(s) is automatically handled by the Kubernetes scheduler
• Automatic node selection prevents users from selecting unhealthy nodes or nodes with a shortage of resources
• However, sometimes manual scheduling is needed: to ensure that certain pods are only scheduled on nodes with specialized hardware like SSD storage, to co-locate services that communicate frequently (availability zones), or to dedicate a set of nodes to a particular set of users
• Kubernetes offers several ways to manually schedule pods. In all cases, the recommended approach is to use label selectors to make the selection
• Manual scheduling options include:
 1. nodeName
 2. nodeSelector
 3. Node affinity
 4. Taints and Tolerations
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
• 127. Kubernetes Scheduling
nodeName
• nodeName is a field of PodSpec
• nodeName is the simplest form of node selection constraint, but due to its limitations it is typically not used
• When the scheduler finds no nodeName property, it automatically adds it and assigns the pod to any available node
• Manually assign a pod to a node by setting the nodeName property to the desired node name
• We can also schedule pods on the Master by this method
• Some of the limitations of using nodeName to select nodes are:
 If the named node does not exist, the pod will not be run, and in some cases may be automatically deleted
 If the named node does not have the resources to accommodate the pod, the pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu
 Node names in cloud environments are not always predictable or stable
• 128. Kubernetes Scheduling
nodeName
nodeName.yml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  nodeName: k8s-master

kubectl apply -f nodeName.yml
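To confirm that the pod actually landed on the named node (the node name k8s-master is taken from the manifest above):

  kubectl get pod nginx -o wide   # the NODE column should show k8s-master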
• 129. Kubernetes Scheduling
nodeSelector
• nodeSelector is a field of PodSpec
• It is the simplest recommended form of node selection constraint
• It uses labels (key-value pairs) to select matching nodes onto which pods can be scheduled
• The disadvantage of nodeSelector is that it is a hard requirement, i.e., if matching nodes are not available, pods remain in the Pending state!
Check default node labels:
kubectl describe node <node-name>
• 130. Kubernetes Scheduling
nodeSelector
Add labels to nodes:
kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes k8s-slave01 environment=dev
Delete a label:
kubectl label node <node-name> <label-key>-
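To verify the label afterwards (label key and value from the command above):

  kubectl get nodes --show-labels
  kubectl get nodes -l environment=dev   # list only nodes carrying the label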
• 131. Kubernetes Scheduling
nodeSelector
nodeSelector.yml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    environment: dev

kubectl apply -f nodeSelector.yml
kubectl get pods -o wide --show-labels
kubectl describe pod <pod-name>
• 132. Kubernetes Scheduling
Node Affinity
• Node affinity is specified as the field nodeAffinity in PodSpec
• Node affinity is conceptually similar to nodeSelector - it allows you to manually schedule pods based on labels on the node, but it has a few key enhancements:
• The nodeAffinity implementation is more expressive. The language offers more matching rules besides the exact matches created with a logical AND operation in nodeSelector
• Rules can be soft preferences rather than hard requirements, so if the scheduler can't find a node with matching labels, the pod will still be scheduled on other nodes
There are currently two types of node affinity rules:
1. requiredDuringSchedulingIgnoredDuringExecution: hard requirement like nodeSelector. No matching node label, no pod scheduling!
2. preferredDuringSchedulingIgnoredDuringExecution: soft requirement. No matching node label, the pod still gets scheduled on other nodes!
The IgnoredDuringExecution part indicates that if labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod will still continue to run on the node.
• 133. Kubernetes Scheduling
Node Affinity
nodeAffinity.yml:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: environment
            operator: In
            values:
            - prod
  containers:
  - name: nginx-container
    image: nginx

kubectl apply -f nodeAffinity.yml
• The pod gets scheduled on a node that has the label environment=prod
• If none of the nodes has this label, the pod remains in the Pending state
• To avoid this, use preferredDuringSchedulingIgnoredDuringExecution (see the sketch below)
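A sketch of the corresponding soft rule; only the affinity block changes, and the weight value (1-100) is illustrative:

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                 # higher weights are favoured more strongly
        preference:
          matchExpressions:
          - key: environment
            operator: In
            values:
            - prod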
• 134. Kubernetes Scheduling
Taints and Tolerations
• Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement)
• Taints are the opposite - they allow a node to repel a set of pods
• Taints are applied to nodes (lock)
• Tolerations are applied to pods (keys)
• In short, a pod should tolerate a node's taint in order to run on it. It's like having the correct key with the pod to unlock the node and enter it
• Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes
• 135. Kubernetes Scheduling
Taints and Tolerations
• By default, the Master node is tainted. So you cannot deploy any pods on the Master
• To check the taints applied on any node, use: kubectl describe node <node-name>
Apply a taint to a node:
kubectl taint nodes <node-name> key=value:<taint-effect>
• The taint's key and value can be any arbitrary string
• The taint effect should be one of the supported taint effects:
1. NoSchedule: no pod will be able to schedule onto the node unless it has a matching toleration
2. PreferNoSchedule: soft version of NoSchedule. The system will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required
3. NoExecute: the node controller will immediately evict all Pods without a matching toleration from the node, and new pods will not be scheduled onto the node
• 136. Kubernetes Scheduling
Taints and Tolerations
Apply a taint to a node:
kubectl taint nodes k8s-slave01 env=stag:NoSchedule
• In the above case, node k8s-slave01 is tainted with the label env=stag and the taint effect NoSchedule. Only pods that tolerate this taint will be scheduled onto this node
Check taints on nodes:
kubectl describe node k8s-slave01 | grep -i taint
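To remove the taint again, repeat the same taint specification with a trailing hyphen:

  kubectl taint nodes k8s-slave01 env=stag:NoSchedule-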
• 137. Kubernetes Scheduling
Taints and Tolerations
Apply tolerations to pods - taint_toleration.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "stag"
        effect: "NoSchedule"

kubectl apply -f taint_toleration.yml
kubectl get pods -o wide
• Here the pods are scheduled onto both slave nodes
• Only slave01 is tainted here, and matching tolerations are added to the pods, so pods are scheduled onto slave01 as well
• If we remove the tolerations from the pods and deploy them, they will get scheduled onto slave02 only, as slave01 is tainted and the matching toleration is no longer available on the pods!
• 138. References
• Docker Installation on Ubuntu
  https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
• K3s Installation
  https://k33g.gitlab.io/articles/2020-02-21-K3S-01-CLUSTER.html
  https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
• Kubernetes 101
  https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
  https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/
• Kubeadm
  https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
• k3s
  https://rancher.com/docs/
• Kubectl commands
  https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  https://kubernetes.io/docs/reference/kubectl/overview/
• Deployments
  https://www.bmc.com/blogs/kubernetes-deployment/
  https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
• Services
  https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
  https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other
  https://theithollow.com/2019/02/05/kubernetes-service-publishing/
  https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/
  https://medium.com/@JockDaRock/metalloadbalancer-kubernetes-on-prem-baremetal-loadbalancing-101455c3ed48
  https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e
• Ingress
  https://www.youtube.com/watch?v=QUfn0EDMmtY&list=PLVSHGLlFuAh89j0mcWZnVhfYgvMmGI0lF&index=18&t=0s
• K8s Dashboard
  https://github.com/kubernetes/dashboard
  https://github.com/indeedeng/k8dash
• YAML
  https://kubeyaml.com/