Containerization Cloud Computing Based Inspiration Technology For Adoption Through Docker and Kubernetes
Abstract-- The field of cloud computing has been evolving rapidly since its inception. The cloud is a virtual pool of resources which can be served to users through SaaS, PaaS and IaaS flavours. No cloud can exist without virtualization. With Virtual Machines, the bare metal is virtualized to run multiple Operating System instances. These VMs serve users for performing their tasks. All are independent units, and the user has complete ownership and control to install the required software and use it as desired. The VM solves many problems by optimizing resources. The developers' concern is that code working fine in the development environment may fail in the testing or production environment due to environment differences, if any.

So, Containerization comes into the picture to address such challenges. In this paper, various aspects of Containerization are explored and highlighted. The Container runtime environment Docker and the Container orchestration tool Kubernetes are focused on and deployed for exploring the possibilities of Containerization adoption, which automates Container deployment, scaling and load balancing.

Keywords-- Cloud Computing, Containerization, Docker, Kubernetes, Microservices, Pod, Rancher.

I. INTRODUCTION

[1] Cloud Computing makes a virtual pool of resources which serves the user's requirements at reasonable cost. It manages the pool of resources automatically and dynamically and provides Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) through Public, Private and Hybrid Cloud models. It serves faster, simpler and cheaper services with high availability and scalability in an elastic manner. In short, the cloud serves 'Everything as a Service'.

[2] To harness the Cloud Computing benefits, the Government of India has already launched the Meghraj Cloud. This not only ensures optimal infrastructure utilization but also expedites the deployment of e-governance applications.

[3] The cloud environment uses virtualization technology to migrate the physical environment into a virtual environment, which reduces the overheads of maintaining the hardware. Virtualization can be achieved by creating Virtual Machines (VMs). A VM is a heavyweight resource and runs on top of the virtualization software, and this virtualization software runs on the host operating system, which slows down performance. Each VM created on top of the virtualization software has a guest OS. The user wants to run an application, but as the VM is equipped with an OS and system programs and files, these components consume space and computing power.

One of the common problems faced by developers and operations teams is that code works successfully on one machine but does not work on another due to differences in the computing environment, if any.

As per the current technology trend, Microservices are gaining popularity among developers. Microservices address the issues associated with monolithic applications.

[4] Microservices are independent and self-sufficient components that isolate fine-grained business capabilities. Microservices are based on the Microservices Architecture, which is an approach to design software applications as suites of independently deployable services. Many applications are easier to create and maintain if they are converted into smaller components which work in conjunction. Each component is developed independently, and the application is then just the integration of these individual components. Updates to an application component are easy to handle. Also, the failure of one Microservice will not affect other Microservices of the system because of its loosely coupled nature. Microservices have gained a lot of popularity because of these advantages.

The Virtual Machine, or Virtualization, is not a suitable option to handle these challenges. To address such issues, the need for Containerization arises. Containerization facilitates deploying various applications utilizing the same OS on a single VM/Server. A Container uses operating-system-level virtualization for deploying applications instead of creating an entire VM.

[5] Containerization is a technology to virtualize applications in a way that results in a significant enhancement in cloud application management.

[6] Containers envelop an application and its associated dependencies inside their own environment. This permits them to execute in an isolated way while utilizing the
Authorized licensed use limited to: NATIONAL INSTITUTE OF TECHNOLOGY HAMIRPUR. Downloaded on April 12,2022 at 17:56:17 UTC from IEEE Xplore. Restrictions apply.
Proceedings of the Second International Conference on Electronics and Sustainable Communication Systems (ICESC-2021)
IEEE Xplore Part Number: CFP21V66-ART; ISBN: 978-1-6654-2867-5
same resources, including the Operating System. It enables rapid, lightweight application deployment because resources are utilized in a feasible manner and are not wasted on running a separate operating system.

[7] The applications are enveloped with all the required dependencies into a standardized format called a Container. These Containers keep running in an isolated manner on top of the host OS. Now several Microservices can run in the same VM by running a separate Container for each Microservice. In this way, Containerization supports the Microservices architecture as well as resolving the issue of differences in computing environments on VMs/Servers. The application will always work the same way, without any difference, on whichever Platform/OS (Data Center, Cloud, Windows or Linux distros) it runs. Containerization is in vogue with the rise of Microservices and Docker.

There are various Containerization tools available to build, deploy and run applications. Some Container runtime environments are Docker, Containerd, CRI-O etc.

[8] Docker is the leading Container platform that packages code and dependencies together and ships them as a Container Image. Docker is an open source tool which also provides an online repository of Docker Images called Docker Hub. Docker Containers are simply runtime instances of Docker Images.

The complex requirements for a Microservice are written in an easy-to-write DockerFile. The developer writes the code that defines the application requirements with its dependencies in a DockerFile. This DockerFile produces an Image. All the dependencies required for the application are present in the Image. This Image can be uploaded to Docker Hub, a repository for Docker Images that plays a role similar to a distributed version control system such as Git for code. Anybody can pull the required Image and build a Container. There is no need to pre-allocate any RAM for the Containers.

The Containers cannot communicate with each other on their own, so they need to be deployed and managed appropriately. They cannot be auto-scaled to handle workloads, and distribution of traffic is also challenging with Containers alone.

So, to handle such issues, [9] Kubernetes comes into the picture. It is an open source Container orchestration engine which automates Container deployment, Container scaling and Container load balancing. The basic deployment unit in Kubernetes is the Pod. The Pod is the basic unit in Kubernetes, as the Container is the basic unit in Docker. Each Pod mainly contains a single Container.

II. MOTIVATION AND OBJECTIVE

Containerization can replace the tradition of virtual machines and monolithic approaches. The virtual machine is not an optimal platform for resource utilization.

The VMs require a separate OS and other resources to deploy and run the applications. The developers are also dependent on the VM's environment to test, run and deploy the application and waste their time sorting out issues of environment differences.

The microservices architecture enables independently deployable services to take over and addresses the issues associated with the monolithic architecture. Microservices architecture makes the application loosely coupled by breaking a software application into independently deployable services.

To address these issues and to take advantage of microservices, Containerization plays a vital role. The applications are enveloped with all related dependencies into a format named a Container. The developers take advantage of this as the application is now tested, run and deployed without worrying about environment differences. Each service is deployed in a separate Container to make it a loosely coupled and independently deployable unit.

There are many Container Runtimes, but Docker is the most popular open source tool, and as a Container Orchestration Platform, Kubernetes is freely available and trending. Nowadays many IT giants are adopting these latest trends and technologies, and many businesses are also adopting Containerization technology due to its easy deployment, scaling and operations.

Here, the objective is to discuss the aspects of Containerization using Docker and Kubernetes, and the focus is to deploy, run and test Docker and Kubernetes in order to explore them and gain experience by deploying applications.

III. DOCKER

Docker [10] works on a Client-Server based model. The Docker client sends requests to the Docker server/daemon. The Docker client and server can both run on the same machine or on separate machines.

Docker makes it easier to create, deploy and run applications by using Containers. It simplifies configuration, manages the code pipeline, increases productivity, and provides application isolation and mobility.

A. Docker Images

The Docker Image can be created using a DockerFile, which contains a list of instructions. A simple docker build command is sufficient to create an image file. All instructions written in the DockerFile are followed to build an Image.

B. Docker Repository

Docker Images are placed in an online repository of images. Images can be pushed and pulled through the simple docker push and docker pull commands respectively. Docker Hub is the repository where anybody can push or pull images without building an Image afresh.

C. Docker Container

A Docker Image is utilized to create a Docker Container. Containers contain all dependencies and instructions for an application to run in isolation. The docker run command is available to execute the Docker Image to create a Container.
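As a brief, hedged illustration of this DockerFile-to-Container workflow (the service name, base image and repository account below are hypothetical, not taken from this paper), a minimal DockerFile might look like:

```dockerfile
# Hypothetical DockerFile for a small Python microservice
FROM python:3.9-slim                  # base Image pulled from Docker Hub
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake the dependencies into the Image
COPY . .
CMD ["python", "app.py"]              # process started when a Container runs
```

Assuming a Docker Hub account named example, the Image could then be built, published and executed with `docker build -t example/myservice:1.0 .`, `docker push example/myservice:1.0` and `docker run -d example/myservice:1.0`; every Container started from this Image carries the same dependencies, regardless of the host it runs on.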
IV. KUBERNETES (K8S)

It is an open source project to automate the deployment, scaling, and management of Containerized applications [11]. Without Container orchestration, the human cost of running services was high, the complexity of running something new in production increased, scaling was difficult, and manual service setup and manual node crash fixing were required. Kubernetes (K8S) provides various significant features which allow running immutable infrastructure.

The key K8S features are as below:
- Horizontal scaling
- Automated rollouts and rollbacks
- Service discovery and load balancing
- Storage orchestration
- Secret and configuration management
- Self-healing
- Batch execution
- Automatic bin packing

Fig. 1. Kubernetes Features

A. Pods

As the atomic unit of scheduling is the VM in Virtualization and the Container in the Docker environment, in the same way the atomic unit Pod is used in Kubernetes.

A single VM can run multiple Pods, and each Pod can contain one or more Containers.

A Pod has a unique IP address, and each Container inside the Pod runs on a separate port number. There are two types of communication: intra-Pod and inter-Pod communication. The Containers inside a Pod communicate through the same network namespace, which means all the Containers inside the same Pod share the same IP address but different port numbers, while in the case of inter-Pod communication, each Pod has a separate IP address and this unique Pod IP establishes the inter-Pod communication.

When a Pod fails/dies, the Containers of that Pod are shifted to a new Pod for uninterrupted execution. The Pod is the basic scheduling unit and has a unique IP address, but Pods are ephemeral, so if a Pod fails, the new Pod gets a fresh IP address. So how is an application exposed to the outside world, how do the various components connect/communicate, and how are Pod IP changes resolved? To address these situations, the Service takes charge.

A Service in Kubernetes is an abstraction that represents a logical set of Pods and a policy through which they are accessed. Services correspond to a set of Pods using labels and selectors. ClusterIP, NodePort and LoadBalancer are the main types of Services:
- ClusterIP exposes the Service on an internal IP in the cluster. This is the default Service type and is not accessible from outside the cluster.
- NodePort helps to expose applications to the outside world. It makes a Service accessible from outside the cluster.
- LoadBalancer provisions an external load balancer in the current cloud and allocates a fixed, external IP to the Service.

B. Kubernetes Components

The cluster is composed of a master node, which helps in exposing the API, scheduling deployments, and generally managing the cluster. Multiple worker nodes are accountable for the Container runtime (Docker, rkt etc.), plus an agent that talks to the master.

C. Master Node Components
- Kube-apiserver is responsible for exposing the API.
- Etcd is a key-value store holding all the cluster data.
- Kube-scheduler schedules new Pods on worker nodes.
- Kube-controller-manager runs the controllers.
- Cloud-controller-manager talks to cloud providers.

D. Worker Node Components

The Kubelet agent ensures that the Containers in a Pod are running. Kube-proxy maintains network rules and performs forwarding. The Container runtime runs the Containers.

E. Use Case of Kubernetes (Containerization)

[12] Secure, Scalable and Sugamya Website as a Service (S3WaaS) is a GIGW based website generating framework which is based on the SaaS model hosted on the National Cloud of the National Informatics Centre (NIC). It is built on Software Defined Infrastructure for smooth provisioning of hosting, compute, storage and networking.

It leverages technology for generating secure websites which are highly customizable and seamlessly deployed on a scalable and completely software defined infrastructure. All the district administrations and many other government bodies have launched S3WaaS based websites, which are utilizing the benefits of cutting edge Containerization technologies like Docker, Kubernetes etc. and other latest trends.
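To make the Pod and Service concepts of Section IV concrete, the following is a minimal, hypothetical manifest (the names, labels and ports are illustrative and not taken from this paper); it defines a single-Container Pod and a NodePort Service that selects that Pod by label:

```yaml
# Hypothetical single-Container Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web            # label the Service selector matches
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
---
# NodePort Service exposing the Pod outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort        # ClusterIP is the default type
  selector:
    app: web            # matches the Pod's label
  ports:
  - port: 80            # Service port inside the cluster
    targetPort: 80      # Container port
    nodePort: 30080     # port opened on every node (range 30000-32767)
```

Applying such a manifest with `kubectl apply -f web.yaml` would make the application reachable at `<node-ip>:30080`, even as the Pod's own ephemeral IP changes across restarts.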
The infrastructure procurement/allocation and its management are not a matter of concern for the user; the user focuses entirely on content quality. The website is developed on the S3WaaS framework using a CMS like WordPress provided through the S3WaaS interface. The user needs to focus on website development instead of resource management. It is built on Containers using the popular open source Container runtime Docker and the Container orchestration tool Kubernetes.

Some important points related to the Infrastructure are mentioned below:
- Built on Containers using open source tools (Docker, Kubernetes, WordPress etc.)
- Agile and Scalable with Auto Healing and Auto Scaling.
- Seamless deployment, scaling and monitoring of each generated website.
- Static content published on a software defined Object Storage with replication.
- Automatic configuration of each website on a software defined Load Balancer.
- Automatic updates for security patches.
- Dashboard for real time log monitoring and website analytics for insights into website usage.

V. RANCHER

Rancher [13] is a multi-cluster orchestration platform which sorts out the operational as well as security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running Containerized workloads. It facilitates delivering Kubernetes-as-a-Service.

Rancher quickly deploys Kubernetes clusters anywhere, on any provider, and also unites these deployed clusters under centralized authentication and access control. Because it is agnostic about where the resources run, you can easily bring a cluster to a different provider and transfer resources between them. Instead of having multiple independent Kubernetes deployments, Rancher integrates them as a single, managed Kubernetes cloud.

Kubernetes is a powerful engine to orchestrate Containers. Rancher incorporates a full Kubernetes distribution, but adds value around Kubernetes in three crucial fields: Cluster Operations and Management, Intuitive Workload Management and Enterprise Support.

Following are some features of Rancher [14]:
- It supports multiple orchestration engines like Kubernetes, Cattle or Docker Swarm.
- It is useful to create a private SDN for each environment, which enables secure communication.
- It distributes the traffic between Containers or services through Container load balancing.
- It supports orchestrating persistent storage services for Docker.
- It is useful to implement distributed DNS based service discovery.
- It helps in monitoring the host resources and managing the deployment of Containers.
- Rancher is designed for multi-tenancy and user management support.
- It makes it easy to upgrade existing Container services, by allowing service cloning and redirection of service requests.
- Rancher supports Docker Machine; it monitors host resources and manages Container deployment, which makes tasks easier.

VI. STEPS DURING DEPLOYMENT SETUP

A. Open Source Cutting Edge Technology Used
- Base OS: CentOS-8
- Docker version: 19.03.13
- Kubeadm/Kubectl version: v1.19.2
- Rancher: v2.5.3

B. Hands-on to set up a Kubernetes Cluster with Docker Runtime Environment

The 04 node cluster with a master node has been set up as mentioned in Fig. 2. The master node works as the manager node, while the rest of the nodes work as worker nodes. The workload is created on the worker nodes only, and the master node is responsible for managing the cluster. We follow the below mentioned steps to set up the 04 node cluster.

Fig. 2. 04 nodes cluster with a master

Step 1. IP hostname mapping in the /etc/hosts file on all (01+04) nodes.

Step 2. systemctl stop/disable firewalld

Step 3. Disable swap to avoid performance issues.
swapoff -a

Step 4. Disable SELinux for subsequent reboots so that Kubernetes continues to run correctly.
setenforce 0 (temporarily)
vi /etc/selinux/config (permanent)
Step 8. Add the Kubernetes repo, as the kubeadm, kubelet and kubectl packages are not available in the default OS repository. So, a repo file needs to be created inside the repos directory. This repo file contains the location where the related packages are available. Run the following on the Linux terminal:

cat <<EOF> /etc/yum.repos.d/Kubernetes.repo
[Kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Step 9. Kubernetes packages installation:
yum install -y kubeadm

Fig. 3. The 04 node cluster with a master node

Later on, the same is verified through the Kubernetes Dashboard utility. The dashboard screenshot of the created nodes is shown in Fig. 4. The Kubernetes Dashboard is a web interface through which applications can also be deployed in the cluster, Containerized applications can be troubleshot, and they can be easily managed. The Dashboard provides an overview of the applications in the cluster. Kubernetes resources can be easily created and modified with the help of the Dashboard. It makes the task of managing the Kubernetes cluster easier and quicker.

The Kubernetes Dashboard may be deployed by running the following command on the Master Node:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
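Once that manifest is applied, the Dashboard is only reachable from inside the cluster by default. As a sketch (assuming the kubernetes-dashboard namespace that the manifest above creates), a common way to reach it from the master node is through the API-server proxy:

```shell
# Start a local proxy to the API server (listens on 127.0.0.1:8001)
kubectl proxy

# The Dashboard UI is then available in a browser at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

Logging in requires a bearer token from a service account with sufficient permissions; creating such an account is described in the Dashboard project's own documentation.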
and is used to store and transport data. The deployment is quicker through the dashboard; the user needs to provide just a few details to launch the application, and can manage and troubleshoot it through the dashboard in a simpler way.

After that, the Rancher unified cluster manager can also be deployed in order to manage the clusters. The complete workload can be managed for DevOps, and corporate level security can be maintained. By running the following command, the multi-cluster orchestration platform Rancher can be launched to take charge of managing our cluster:

[16] docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest

Then the cluster can be added/imported into the Rancher orchestration manager. Fig. 5 shows the Rancher Dashboard of our added cluster, which provides an overview of the overall infrastructure, workloads etc. The figure shows the created resources like Namespaces, Deployments, Services, DaemonSets, Ingresses, Jobs, StatefulSets etc. of our cluster.
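As a short, hedged sketch of what follows the docker run command for Rancher (the host address is a placeholder), the container can be confirmed running before the cluster is imported:

```shell
# Confirm the Rancher container is up
docker ps --filter ancestor=rancher/rancher:latest

# Rancher's web UI is then served at https://<host-ip>/
# (the first login sets an admin password; importing an existing
# cluster generates a kubectl apply command to run against it)
```

The import step works by installing Rancher's agents into the target cluster via that generated kubectl command, after which the cluster appears in the Rancher Dashboard as shown in Fig. 5.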