
UNIT III

VIRTUALIZATION INFRASTRUCTURE AND DOCKER

Desktop Virtualization

Desktop virtualization is the concept of isolating a logical operating system (OS) instance from the client that is used to access it.

There are several different conceptual models of desktop virtualization, which can broadly be divided into two categories based on whether the technology executes the OS instance locally or remotely. It is important to note that not all forms of desktop virtualization technology involve the use of virtual machines (VMs).

Desktop virtualization deployment types

 Virtual desktop infrastructure (VDI)
 Remote Desktop Services (RDS), formerly Terminal Services
 Desktop as a service (DaaS)



Types of desktop virtualization technologies

1. Host-based VM. Each user connects to an individual VM that a data center hosts. The user may connect to the same persistent desktop every time or access a fresh nonpersistent desktop with each login.
2. Shared host. Users connect to a shared desktop that runs on a server. RDS takes this client-server approach. Users may also connect to individual applications running on a server; this technology is an example of application virtualization.
3. Host-based physical machine. The OS runs directly on another device's physical hardware.

Benefits and drawbacks of desktop virtualization

One of the primary advantages of desktop virtualization is that virtualization often makes it easier for IT professionals to manage the desktop environment. Rather than maintaining countless physical desktops, administrators can focus their attention on a small number of desktop images that they deploy to the users.

Conversely, there are some circumstances in which the use of desktop virtualization can increase an organization's management burden and its licensing costs. For example, if an organization chooses to allow users to connect to virtual desktops from their physical desktops, then the IT staff will have to license and maintain both the physical and virtual desktops.

Another advantage to desktop virtualization is that users can access their virtual
desktops from anywhere. Even if a user is working from home or a hotel room,
they can still work from the same desktop environment that they use in the office.

A potential disadvantage, however, is that virtual desktops cannot function without connectivity to the VDI environment. As such, an internet connectivity failure or a server hardware failure could make an organization's virtual desktops inaccessible to users.



Network Virtualization

Network virtualization is the process of logically grouping physical networks and making them operate as a single network or as multiple independent networks, called virtual networks.

General Architecture Of Network Virtualization

Tools for Network Virtualization :

1. Physical switch OS

The operating system of the physical switch must itself provide network virtualization functionality.

2. Hypervisor

The hypervisor provides network virtualization through built-in networking features or through third-party software.

The basic function of the OS is to provide the application or executing process with a simple set of instructions. System calls generated by the OS and executed through the library are comparable to the service primitives provided at the interface between the application and the network through the SAP (Service Access Point).



Functions of Network Virtualization:

• It enables the functional grouping of nodes in a virtual network.

• It enables the virtual network to share network resources.

• It allows communication between nodes in a virtual network without routing of frames.

• It restricts management traffic.

• It enforces routing for communication between virtual networks.

Network Virtualization in Virtual Data Center:

1. Physical Network

• Physical components: network adapters, switches, bridges, repeaters, routers and hubs.

• Grants connectivity among physical servers running a hypervisor, between physical servers and storage systems, and between physical servers and clients.

2. VM Network

• Consists of virtual switches (a minimal bridge sketch follows this list).

• Provides connectivity to hypervisor kernel.

• Connects to the physical network.

• Resides inside the physical server.
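The virtual switch inside the physical server can be illustrated, as a hedged sketch only, with a Linux bridge; the interface names (br0, tap0, eth0) are assumptions chosen for illustration, and hypervisors such as ESXi or XenServer set up this wiring automatically.

    # Minimal sketch of a software "virtual switch" built with a Linux bridge.
    # Interface names are illustrative assumptions.
    sudo ip link add name br0 type bridge        # the virtual switch inside the server
    sudo ip link set br0 up
    sudo ip tuntap add dev tap0 mode tap         # tap port that a VM's virtual NIC attaches to
    sudo ip link set tap0 master br0             # plug the VM port into the switch
    sudo ip link set tap0 up
    sudo ip link set eth0 master br0             # uplink the switch to the physical network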



Advantages of Network Virtualization:

 Improves manageability
 Reduces CAPEX
 Improves utilization
 Enhances performance
 Enhances security

Disadvantages of Network Virtualization:

 It requires IT to be managed in the abstract.
 It needs to coexist with physical devices in a cloud-integrated hybrid environment.
 Increased complexity.
 Upfront cost.
 Possible learning curve.

Storage Virtualization

Storage virtualization in cloud computing is the pooling of physical storage from multiple storage devices so that it appears as a single storage device. It can also be described as a group of available storage devices that is managed from a central console.



Types of Storage Virtualization

• Hardware Assisted Virtualization

• Kernel Level Virtualization

• Hypervisor Virtualization

• Para-Virtualization

• Full Virtualization

i. Hardware Assisted Virtualization

This type of virtualization requires hardware support. It is similar to full virtualization and para-virtualization. Here, an unmodified OS can run because hardware support for virtualization is used to handle hardware access requests and protected operations.

ii. Kernel Level Virtualization

It runs a separate version of the Linux kernel. Kernel-level virtualization allows running multiple servers on a single host. It uses a device driver to communicate between the main Linux kernel and the virtual machines. This virtualization is a special form of server virtualization.

iii. Hypervisor Virtualization

A hypervisor is a layer between the operating system and the hardware. With the help of a hypervisor, multiple operating systems can run on the same machine. Moreover, it provides the features and necessary services that help the OS work properly.

iv. Para-Virtualization

It is based on a hypervisor, which handles emulation and trapping of software. Here, the guest operating system is modified before it is installed on a machine. The modified system communicates directly with the hypervisor, which improves performance.

v. Full Virtualization

This virtualization is similar to para-virtualization. In this approach, the hypervisor traps the machine operations that the operating system uses to perform its work. After trapping the operations, it emulates them in software and returns the status codes.

Methods of Storage Virtualization

i. File-based Storage Virtualization

This type of virtualization is used for a specific purpose and can be applied to a network-attached storage (NAS) system.

File-based storage virtualization in cloud computing uses the Server Message Block (SMB) or Network File System (NFS) protocols and, with their help, breaks the dependency found in a normal network-attached storage array: the dependency between the data being accessed and the location of the physical storage. It also provides the benefit of better handling of file migration in the background, which improves performance.
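As a hedged illustration of file-based access, the snippet below mounts an NFS export so that clients address files by path rather than by physical location; the server name and paths are assumptions, not part of the source material.

    # Mount a file share exposed by a NAS; the client never sees where the data physically lives.
    sudo mount -t nfs nas01:/export/projects /mnt/projects
    ls /mnt/projects    # files appear local; the array can migrate them in the background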

ii. Block-based Virtual Storage

Block-based virtual storage is more widely used than file-based virtual storage, since the file-based system is often deployed only for specific purposes. The block-based virtual storage system uses logical storage, such as a drive partition, in place of the physical storage in a storage device.

It also abstracts the logical storage, such as a hard disk drive or a solid-state memory device. This allows the virtualization management software to determine the capacity of the available devices and split them into shared resources to assign.
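A common way to see block-level abstraction in practice is Linux LVM, sketched below under the assumption of two spare disks; the device names, volume group, and size are illustrative only.

    # Pool two physical disks and carve a logical block device out of the pool.
    sudo pvcreate /dev/sdb /dev/sdc           # register the physical disks
    sudo vgcreate datavg /dev/sdb /dev/sdc    # group them into one shared pool
    sudo lvcreate -L 50G -n appdata datavg    # logical "drive partition" taken from the pool
    sudo mkfs.ext4 /dev/datavg/appdata        # consumers see a single block device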

Advantages of Storage Virtualization

i. Easy Retrieval and Upload of Data

In storage virtualization, data can be retrieved quickly from virtual storage. It is as easy as accessing a file on the local computer. Data can also be stored easily with the help of an application and an internet connection.



ii. Better Management

Data can be migrated based upon utilization; for example, data that is frequently used can be stored on a high-performance storage system, while data that is rarely used can be placed on a somewhat slower system.

iii. Security

In storage virtualization, data is stored in different places and secured with maximum security. If a disaster takes place, the data can be retrieved from another location without affecting the customer.

Storage can also be provisioned to meet actual utilization requirements rather than simply providing additional capacity.

Application Virtualization

Application virtualization software allows users to access and use an application from a separate computer than the one on which the application is installed. Using application virtualization software, IT admins can set up remote applications on a server and deliver the apps to an end user's computer. For the user, the experience of the virtualized app is the same as using the installed app on a physical machine.

Benefits of Application Virtualization

Because applications run from centrally managed server images, application virtualization facilitates regulatory compliance with standards such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA). Since data is not processed or stored on endpoint devices, no data breach occurs should a device become compromised. The endpoint device is but a display terminal.

Application and desktop virtualization both support incident management, resolving many adverse desktop events by merely refreshing a virtualized image and restoring the desktop environment to its previous state.

Other virtualization benefits include:

 Allows the running of legacy apps (e.g., those developed for end-of-life OS
platforms like Windows 7 and XP).



 Enables cross-platform operations (e.g., running Windows apps on iOS,
Android, macOS, and Chrome OS).
 Prevents conflicts with other virtualized apps (e.g., conflicting anti-malware
software).
 Permits users to run multiple app instances (if not virtualized, many apps allow only a single instance to run at a time).

Clusters

At a high level, a computer cluster is a group of two or more computers, or nodes, that run in parallel to achieve a common goal. This allows workloads consisting of a high number of individual, parallelizable tasks to be distributed among the nodes in the cluster.

Virtual clusters and Resource Management

Virtual Cluster

 As with traditional physical servers, virtual machines (VMs) can also be clustered. A VM cluster starts with two or more physical servers; we'll call them Server A and Server B.
 In simple deployments, if Server A fails, its workloads restart on Server B.
 With many VMs, an inefficient configuration always causes problems with overloading or underutilization.
 XenServer and VMware ESXi Server support a bridging mode which allows all domains to appear on the network as individual hosts.
 With this mode, VMs can communicate with one another freely through the virtual network interface card and configure the network automatically.



Virtual Clusters Management

• It is necessary to effectively manage VMs running on virtual clusters and consequently build a high-performance virtualized computing environment.

• This involves

 virtual cluster deployment,
 monitoring and management over large-scale clusters,
 resource scheduling, load balancing,
 server consolidation, fault tolerance, and other techniques.

• Apart from this, there are common installations for most users or applications, such as an OS or user-level programming libraries.

• These software packages can be preinstalled as templates (called template VMs).

Deployment

• There are four steps to deploy a group of VMs onto a target cluster:

 preparing the disk image,
 configuring the VMs,
 choosing the destination nodes, and
 executing the VM deployment command on every host.

Many systems use templates to simplify the disk image preparation process.

• A template is a disk image that includes a preinstalled operating system with or without certain application software.

• Templates could implement the COW (Copy on Write) format. A new COW
backup file is very small and easy to create and transfer.

• Therefore, it definitely reduces disk space consumption.
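For example, with QEMU-style disk images a copy-on-write clone of a template can be created as sketched below; the file names are assumptions, and other hypervisors provide equivalent mechanisms (e.g., linked clones).

    # Create a small copy-on-write overlay that reuses the template's blocks.
    qemu-img create -f qcow2 -b template-os.qcow2 -F qcow2 vm01.qcow2
    qemu-img info vm01.qcow2    # the overlay starts tiny; only changed blocks are written to it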

• A VM is configured with a name, disk image, network setting, and allocated CPU and memory.

• One needs to record each VM's configuration in a file. However, this method is inefficient when managing a large group of VMs.

VMs with the same configurations could use pre-edited profiles to simplify the process. In this scenario, the system configures the VMs according to the chosen profile.

• Most configuration items use the same settings, while other items, such as the UUID, VM name, and IP address, are assigned automatically calculated values.

Containers vs. Virtual Machines

Containers and virtual machines are very similar resource virtualization technologies. Virtualization is the process by which a singular system resource, such as RAM, CPU, disk, or networking, can be 'virtualized' and represented as multiple resources. The key differentiator between containers and virtual machines is that virtual machines virtualize an entire machine down to the hardware layers, while containers only virtualize software layers above the operating system level.



Container

Containers are lightweight software packages that contain all the dependencies
required to execute the contained software application. These dependencies include
things like system libraries, external third-party code packages, and other operating
system level applications. The dependencies included in a container exist in stack
levels that are higher than the operating system.

Pros

• Iteration speed

Because containers are lightweight and only include high level software, they are
very fast to modify and iterate on.

• Robust ecosystem

Most container runtime systems offer a hosted public repository of pre-made containers. These container repositories contain many popular software applications like databases or messaging systems and can be instantly downloaded and executed, saving time for development teams.

Cons



• Shared host exploits

Because containers all share the same underlying hardware system below the operating system layer, it is possible that an exploit in one container could break out of the container and affect the shared hardware. Most popular container runtimes have public repositories of pre-built containers. There is a security risk in using one of these public images, as they may contain exploits or may be vulnerable to being hijacked by nefarious actors.

Virtual machine

Pros

• Full isolation security

Virtual machines run in isolation as a fully standalone system. This means that
virtual machines are immune to any exploits or interference from other virtual
machines on a shared host. An individual virtual machine can still be hijacked by
an exploit but the exploited virtual machine will be isolated and unable to
contaminate any other neighboring virtual machines.

• Interactive development

Containers are usually static definitions of the expected dependencies and configuration needed to run the container. Virtual machines are more dynamic and can be interactively developed. Once the basic hardware definition is specified for a virtual machine, the virtual machine can then be treated as a bare-bones computer. Software can be manually installed on the virtual machine, and the virtual machine can be snapshotted to capture the current configuration state. The virtual machine snapshots can be used to restore the virtual machine to that point in time or to spin up additional virtual machines with that configuration.

Cons

• Iteration speed

Virtual machines are time-consuming to build and regenerate because they encompass a full-stack system. Any modification to a virtual machine snapshot can take significant time to regenerate and to validate that it behaves as expected.



• Storage size cost

Virtual machines can take up a lot of storage space. They can quickly grow to several gigabytes in size. This can lead to disk space shortage issues on the virtual machine's host machine.

Introduction to Docker

Docker:

Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than a virtual machine. Docker is a tool that is used to automate the deployment of applications in an environment designed to manage containers.

Important Terminologies in Docker

1. Docker Image

• It is a file, composed of multiple layers, used to execute code in a Docker container.

• Images are a set of instructions used to create Docker containers.

2. Docker Container

• It is a runtime instance of an image.

• Allows developers to package applications with all parts needed, such as libraries and other dependencies.

3. Dockerfile

• It is a text document that contains the commands which, on execution, help assemble a Docker image.

• A Docker image is created using a Dockerfile (a minimal example follows below).
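A minimal sketch of a Dockerfile and its build step is shown below; the base image, file names, and tag are assumptions for illustration, not taken from the source.

    # Write an illustrative Dockerfile and build it into an image.
    cat > Dockerfile <<'EOF'
    FROM python:3.12-slim
    WORKDIR /app
    COPY app.py .
    CMD ["python", "app.py"]
    EOF
    docker build -t myapp:1.0 .    # the daemon runs each instruction and stores the result as image layers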



4. Docker Engine

• The software that hosts the containers is named Docker Engine.

• Docker Engine is a client-server based application.

• The Docker Engine has three main components:

• Server: It is responsible for creating and managing Docker images, containers, networks, and volumes on the Docker host. It is referred to as the daemon process.

• REST API: It specifies how the applications can interact with the Server and
instructs it what to do.

• Client: The client is the Docker command-line interface (CLI), which allows us to interact with Docker using docker commands.

5. Docker Hub

• Docker Hub is the official online repository where you can find other
Docker Images that are available for use.

• It makes it easy to find, manage, and share container images with others.

Docker Components



Docker Client

The Docker client enables users to interact with Docker.

Docker uses a client-server architecture, which means the Docker client can connect to the Docker host locally or remotely. The Docker client and host (daemon) can run on the same machine or on different machines and communicate through sockets or a RESTful API.

The Docker client is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to the Docker daemon, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

We communicate with the Docker client using the Docker CLI; the commands we issue there are passed by the client to the Docker daemon.
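A small hedged example of the same client talking to a local and a remote daemon is shown below; the host names and address are assumptions.

    docker ps                                        # local daemon, reached through its Unix socket
    DOCKER_HOST=ssh://admin@build-server docker ps   # same CLI, daemon on a remote host over SSH
    docker -H tcp://10.0.0.5:2375 info               # explicit remote endpoint (shown only as an example)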

Docker Host



The Docker host provides a complete environment to execute and run applications.
It includes Docker daemon, Images, Containers, Networks, and Storage.

Docker Daemon

The Docker daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.

When you use a docker run command to start up a container, your Docker client translates that command into an HTTP API call and sends it to the Docker daemon. The Docker daemon then evaluates the request, talks to the underlying OS, and provisions your container.

The daemon is responsible for all container-related actions and receives commands
via the CLI or the REST API. It can also communicate with other daemons to
manage its services. The Docker daemon pulls and builds container images as
requested by the client.
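To make the client-daemon split concrete, the hedged sketch below queries the daemon's REST API directly over the local socket; it roughly corresponds to what docker ps does on a default Linux installation.

    # Ask the daemon for its running containers through the Docker API.
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json
    # The daemon evaluates the request and returns JSON describing each container.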

Docker Images:

Docker images are read-only binary templates used to build containers. Images also contain metadata that describes the container's capabilities and needs.

You create a Docker image using the docker build command. Whenever you pass a Dockerfile to the docker build command, the Docker daemon creates a Docker image according to the Dockerfile's instructions.

You run Docker images using the docker run command. Whenever we pass this command to the Docker client, the client passes it to the Docker daemon, and the daemon creates a container from that image.

You push a Docker image to a public registry such as Docker Hub using the docker push command; once pushed, the image can be pulled from anywhere using the docker pull command.

An image can be used to build a container. Container images can be shared across
teams within an enterprise using a private container registry, or shared with the
world using a public registry like Docker Hub.
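The build, run, and push steps described above can be summarized in a short hedged workflow; the image name and registry account are assumptions.

    docker build -t myapp:1.0 .                   # daemon builds an image from the Dockerfile
    docker run --rm myapp:1.0                     # daemon creates and starts a container from the image
    docker tag myapp:1.0 exampleuser/myapp:1.0    # re-tag it under a registry namespace
    docker push exampleuser/myapp:1.0             # publish to Docker Hub
    docker pull exampleuser/myapp:1.0             # retrieve it from any other machine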



Docker Containers:

A container is a runnable instance of an image. You can create, start, stop, move,
or delete a container using the Docker API or CLI. You can connect a container to
one or more networks, attach storage to it, or even create a new image based on its
current state.

Containers provide you with a lightweight and platform-independent way of running your applications. Every container is isolated, but access to resources on another host or container can be allowed with the help of Docker networking.

A container is volatile, which means that whenever you remove or kill the container, all of its data is lost. If you want to persist container data, use the Docker storage concept.

Containers only have access to resources that are defined in the image, unless
additional access is defined when building the image into a container.
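A hedged sketch of the container lifecycle operations mentioned above is given below; the container name, port mapping, and images are assumptions.

    docker create --name web -p 8080:80 nginx:alpine   # create a container from an image
    docker start web                                    # start it
    docker stop web                                     # stop it
    docker commit web web-snapshot:1.0                  # create a new image from its current state
    docker rm web                                       # delete it (its writable-layer data is lost)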

Docker Networking

Through Docker networking, one container can communicate with other containers.

By default, you get three different networks on the installation of Docker – none,
bridge, and host. The none and host networks are part of the network stack in
Docker. The bridge network automatically creates a gateway and IP subnet and all
containers that belong to this network can talk to each other via IP addressing.

The other type of network is the user-defined network. Administrators can configure multiple user-defined networks. There are three types:

• Bridge network: A user-defined bridge network differs in that there is no need for port forwarding for containers within the network to communicate with each other.

• Overlay network: An overlay network is used when you need containers on separate hosts to be able to communicate with each other, as in the case of a distributed network.



• Macvlan network: When using bridge and overlay networks, a bridge resides between the container and the host. A macvlan network removes this bridge, providing the benefit of exposing container resources to external networks without dealing with port forwarding. This is realized by using MAC addresses instead of IP addresses.
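The default and user-defined networks can be explored with the commands sketched below; the network and container names are assumptions, and the overlay example requires swarm mode to be enabled.

    docker network ls                                   # lists the default bridge, host, and none networks
    docker network create --driver bridge appnet        # user-defined bridge network
    docker run -d --name db  --network appnet redis:7
    docker run -d --name api --network appnet myapp:1.0
    # On a user-defined bridge, "api" can reach "db" by container name with no port forwarding.
    docker network create --driver overlay --attachable swarmnet   # overlay network (swarm mode only)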

Docker Storage

A container is volatile, which means that whenever you remove or kill the container, all of its data is lost. If you want to persist container data, use the Docker storage concept.

You can store data within the writable layer of a container, but this requires a storage driver. In terms of persistent storage, Docker offers the following options:

• Data volumes: Data volumes provide the ability to create persistent storage, with the ability to rename volumes, list volumes, and also list the container that is associated with a volume. Data volumes sit on the host file system, outside the container's copy-on-write mechanism, and are fairly efficient.

• Data-volume container: A data-volume container is an alternative approach in which a dedicated container hosts a volume and that volume is mounted into other containers. In this case, the volume container is independent of the application container and can therefore be shared across more than one container.

• Bind mounts: Another option is to mount a host's local directory into a container. In the previously mentioned cases, the volumes have to live within Docker's volumes folder, whereas with bind mounts any directory on the host machine can be used as a source for the volume.
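Hedged examples of the three options follow; the volume, container, and image names as well as the host path are assumptions.

    docker volume create appdata                                        # data volume on the host file system
    docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:16
    docker create -v /shared --name datastore alpine:3 true             # data-volume container
    docker run --rm --volumes-from datastore alpine:3 ls /shared        # another container mounts its volume
    docker run -d --name web -v /home/user/site:/usr/share/nginx/html nginx:alpine   # bind mount of a host directory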

Docker Registries

Docker registries are services that provide locations where you can store and download images.

In other words, a Docker registry contains repositories that host one or more
Docker Images.



Public registries include Docker Hub and Docker Cloud, and private registries can also be used. You can also create your own private registry.
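A hedged sketch of pushing to Docker Hub and to a self-hosted private registry is shown below; the account name and tags are assumptions.

    docker login                                              # authenticate to Docker Hub
    docker push exampleuser/myapp:1.0
    docker run -d -p 5000:5000 --name registry registry:2     # run your own private registry
    docker tag myapp:1.0 localhost:5000/myapp:1.0
    docker push localhost:5000/myapp:1.0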

Docker Images and Repositories

Docker Hub is the world's largest repository of container images, with an array of content sources including container community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers.
