Kubernetes
Kubernetes Terminology, Kubernetes Features, Kubernetes Pods, Kubernetes Cluster Model
Kubernetes
• Kubernetes is an open-source container management
tool hosted by the Cloud Native Computing Foundation
(CNCF). It is also known as an enhanced version of
Borg, which was developed at Google to manage both
long-running processes and batch jobs that were
earlier handled by separate systems. Kubernetes
automates the deployment and scaling of applications
and the operation of application containers across
clusters, and it is capable of creating
container-centric infrastructure.
The following are some of the important
components and features of Kubernetes:
• Scheduler – Tracks the status of worker nodes and determines where to run
Pods. Kube-scheduler is intended to be accessible only from within the control
plane.
• Kubernetes worker nodes are physical or virtual
machines dedicated to running containerized
applications for the cluster. In addition to running a
container engine, worker nodes host the following two
services that allow orchestration from the control plane:
• Kubelet – Runs on each worker node to orchestrate and
verify Pod execution.
• Kube-proxy – A network proxy that uses the host’s
packet filtering capability to ensure correct packet
routing in the Kubernetes cluster.
• Clusters are commonly hosted using a CSP Kubernetes
service or an on-premises Kubernetes service; CSPs
often provide additional features. They administer
most aspects of managed Kubernetes services;
however, organizations may need to handle some
Kubernetes service aspects, such as authentication
and authorization, because default CSP configurations
are typically not secure. When designing a Kubernetes
environment, organizations should understand their
responsibilities in securely maintaining the cluster.
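• To make the scheduler, kubelet, and worker-node pieces above more
concrete, here is a minimal sketch (not from the original slides) that uses
the official kubernetes Python client to list the worker nodes the scheduler
tracks, along with the kubelet version each node reports. It assumes a valid
kubeconfig with read access to the cluster; node names come from your own
environment.

# Sketch: inspect the worker nodes that kube-scheduler places Pods onto.
# Assumes the 'kubernetes' Python client is installed and ~/.kube/config
# (or an equivalent kubeconfig) points at a cluster you can read.
from kubernetes import client, config

config.load_kube_config()   # load credentials from the local kubeconfig
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node reports its kubelet version and a Ready condition.
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(node.metadata.name, node.status.node_info.kubelet_version, "Ready:", ready)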
Kubernetes Pods
• With Kubernetes, our ultimate aim is to deploy our
application in the form of containers on a set of
machines that are configured as worker nodes in
a cluster. However, Kubernetes does not deploy
containers directly on the worker nodes. The
containers are encapsulated into a Kubernetes
object known as a Pod. A Pod is a single instance
of an application, and it is the smallest object
that you can create in Kubernetes.
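• As a hedged illustration of "a Pod is the smallest object you can
create", the sketch below submits a single-container Pod directly to the API
server with the official Python client. The Pod name my-web-app and the
nginx image are placeholders, not anything from the slides.

# Sketch: create one Pod that wraps one application container.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "my-web-app"},              # hypothetical name
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25"},  # example image; one container per Pod
        ],
    },
}

v1.create_namespaced_pod(namespace="default", body=pod_manifest)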
• Here we see the simplest possible case: you have a
single-node Kubernetes cluster with a single instance of your
application running in a single Docker container encapsulated
in a Pod. What if the number of users accessing your
application increases and you need to scale your application?
You need to add additional instances of your web application
to share the load. Now, where would you spin up additional
instances? Do we bring up a new container instance within
the same Pod? No! We create a new Pod altogether with a
new instance of the same application. As you can see, we now
have two instances of our web application running in two
separate Pods on the same Kubernetes system or node.
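• Scaling in this model means creating another Pod, not adding another
container to the existing Pod. Continuing the placeholder example above,
this sketch creates a second Pod running the same image and then lists the
Pods so you can see two separate instances.

# Sketch: scale up by creating a second Pod with the same application image.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

second_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "my-web-app-2"},   # hypothetical name
    "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
}
v1.create_namespaced_pod(namespace="default", body=second_pod)

for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, "->", pod.spec.node_name, pod.status.phase)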
• What if the user base further increases and your current
node does not have sufficient capacity? Well, then you can always
deploy additional Pods on a new node in the cluster. You will
have a new node added to the cluster to expand the cluster's
physical capacity. So, what I am trying to illustrate in this slide
is that Pods usually have a one-to-one relationship with the
containers running your application. To scale up you create
new Pods, and to scale down you delete Pods. You do not
add additional containers to an existing Pod to scale your
application. Also, if you are wondering how we implement all
of this and how we achieve load balancing between
containers, we will get into all of that in a later lecture.
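• To round out the one-to-one idea: scaling down is simply deleting a Pod,
and the scheduler decides which node each Pod lands on, so newly added nodes
can receive new Pods automatically. A minimal sketch (same placeholder names
as above) that deletes one Pod and prints where the remaining Pods run:

# Sketch: scale down by deleting a Pod, then show remaining Pods and their nodes.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.delete_namespaced_pod(name="my-web-app-2", namespace="default")  # hypothetical name

for pod in v1.list_namespaced_pod(namespace="default").items:
    # spec.node_name shows which worker node the scheduler picked for this Pod.
    print(pod.metadata.name, "on node", pod.spec.node_name)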
What is a Kubernetes cluster model?